markdown | code | path | repo_name | license
---|---|---|---|---|
Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up. | train_df.describe() | courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
Write to .csv files
In the final versions, we want to read from files, not Pandas dataframes, so we write the Pandas dataframes out as CSV files. Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from always being assigned to the slow workers. | # Define columns
columns = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
# Write out CSV files
train_df.to_csv(
path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
path_or_buf="eval.csv", columns=columns, header=False, index=False)
test_df.to_csv(
path_or_buf="test.csv", columns=columns, header=False, index=False)
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv | courses/machine_learning/deepdive2/end_to_end_ml/labs/sample_babyweight.ipynb | GoogleCloudPlatform/training-data-analyst | apache-2.0 |
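As an aside, here is a minimal sketch (my addition, not part of the lab) of how such CSV files are typically read back with shuffling, assuming TensorFlow's `tf.data` API; the parsing helper and defaults below are illustrative and may differ from the lab's actual input pipeline.

```python
import tensorflow as tf

# Hypothetical parsing helper -- column names/types mirror the CSV columns written above.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
RECORD_DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]

def parse_csv(line):
    fields = tf.io.decode_csv(line, record_defaults=RECORD_DEFAULTS)
    return dict(zip(CSV_COLUMNS, fields))

dataset = (tf.data.TextLineDataset("train.csv")
           .shuffle(buffer_size=10000)  # shuffle on read, as discussed above
           .map(parse_csv)
           .batch(32))
```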
Wiki links from OBO descriptions | import wiki
lst = wiki.get_links_from_ontology(ontology)
print(r'example:{:}'.format(repr(lst[10]))) | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Use urllib2 to read the page HTML | page = wiki.get_html(lst[101])
page[:1000] | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Fuzzy logic | import fuzzywuzzy.process as fuzzy_process
from fuzzywuzzy import fuzz
string = "ventricular arrhythmia"
names = np.sort(list(name2doid.keys()))
print(fuzzy_process.extractOne(string, names, scorer=fuzz.token_set_ratio))
string = "Complete remission of hairy cell leukemia variant (HCL-v) complicated by red cell aplasia post treatment with rituximab."
print(fuzzy_process.extractOne(string, names, scorer=fuzz.partial_ratio)) | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Wikipedia search engine: headers | query = "ventricular arrhythmia"
top = wiki.get_top_headers(query)
top
for header in top:
    results = fuzzy_process.extractOne(header, names, scorer=fuzz.token_set_ratio)
    print(results)
page = wikipedia.WikipediaPage(title='Cell_proliferation')
page.summary | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
[name for name in names if len(re.split(' ', name)) > 3]
PubMed | import pubmed
query = 'hcl-v'
titles = pubmed.get(query)
titles_len = [len(title) for title in titles]
for i, string in enumerate(titles):
    print("%d) %s" % (i+1, string))
    print(fuzzy_process.extractOne(string, names, scorer=fuzz.partial_ratio))
    print() | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
def find_synonym(s_ref, s):
    last = s_ref.find('(' + s + ')')
    if last == -1:
        return None
    n_upper = len(''.join([c for c in s if c.isupper()]))
    first = [(i, c) for i, c in enumerate(s_ref[:last]) if c.isupper()][-n_upper][0]
    return s_ref[first:last-1]

print(find_synonym('Wolff-Parkinson-White syndrome (WPW) and athletes: Darwin at play?',
                   'WPW'))
Synonyms | import utils
print(utils.find_synonym('Wolff-Parkinson-White syndrome (WPW) and athletes: Darwin at play?', 'WPW'))
print(utils.find_synonym('Complete remission of hairy cell leukemia variant (HCL-v)...', 'hcl-v')) | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Asymmetric distance | s_ref = 'artery disease'
s = 'nonartery'
print(utils.assym_dist(s, s_ref)) | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Length statistics | print('Mean term name length:', np.mean([len(term.name) for term in ontology.get_terms()]))
print('Mean article title length:', np.mean(titles_len)) | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Unique words | words = [re.split(' |-', term.name) for term in ontology.get_terms()]
words = np.unique([l for sublist in words for l in sublist if len(l) > 0])
words = [w for w in words if len(w) >= 4]
words[:10] | wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
Threading | from threading import Thread
from time import sleep
from ontology import get_ontology

query_results = None
def fn_get_q(query):
    global query_results
    query_results = fuzzy_process.extractOne(query, names, scorer=fuzz.ratio)
    return True

wiki_results = None
def fn_get_wiki(query):
    global wiki_results
    header = wiki.get_top_headers(query, 1)[0]
    wiki_results = fuzzy_process.extractOne(header, names, scorer=fuzz.ratio)
    #sleep(0.1)
    return True

pubmed_results = None
def fn_get_pubmed(query):
    global pubmed_results
    string = pubmed.get(query, topK=1)
    if string is not None:
        string = string[0]
        print(string)
        pubmed_results = fuzzy_process.extractOne(string, names, scorer=fuzz.partial_ratio)
        return True
    else:
        return False

'''main'''
## from bot
query = 'valve disease'

def find_answer(query):
    query = query.lower()
    # load ontology
    ontology = get_ontology('../data/doid.obo')
    name2doid = {term.name: term.id for term in ontology.get_terms()}
    doid2name = {term.id: term.name for term in ontology.get_terms()}
    ## exact match
    if query in name2doid.keys():
        doid = name2doid[query]
    else:
        # exact match -- no
        th_get_q = Thread(target=fn_get_q, args=(query,))
        th_get_wiki = Thread(target=fn_get_wiki, args=(query,))
        th_get_pubmed = Thread(target=fn_get_pubmed, args=(query,))
        th_get_q.start()
        th_get_wiki.start()
        th_get_pubmed.start()
        ## search engine query --> vertices, p=100(NLP??); synonyms
        ## new thread for synonyms???
        ## synonyms NLP
        ## new thread for NLP
        ## tree search on vertices (returned + synonyms)
        ## sleep ?
        th_get_q.join()
        print(query_results)
        th_get_wiki.join()
        print(wiki_results)
        th_get_pubmed.join()
        print(pubmed_results)
        ## final answer
        ## draw graph
        doid = None
    graph = None
    return doid, graph
| wiki_pubmed_fuzzy/.ipynb_checkpoints/Wiki-PubMed-Fuzzy-checkpoint.ipynb | elliekinz/Disease-ontology | apache-2.0 |
We'll train a logistic regression model of the form
$$
p(y = 1 ~|~ {\bf x}; {\bf w}) = \frac{1}{1 + \textrm{exp}[-(w_0 + w_1x_1 + w_2x_2)]}
$$
using sklearn's logistic regression classifier as follows | from sklearn.linear_model import LogisticRegression # import from sklearn
logreg = LogisticRegression() # initialize classifier
logreg.fit(X_train, y_train); # train on training data | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Q: Determine the parameters ${\bf w}$ fit by the model. It might be helpful to consult the documentation for the classifier on the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html">sklearn website</a>. Hint: The classifier stores the coefficients and bias term separately.
Q: In general, what does the Logistic Regression decision boundary look like for data with two features?
Q: Modify the code below to plot the decision boundary along with the data. | import numpy as np
import math
fig = plt.figure(figsize=(8,8))
plt.scatter(X_train[:, 0], X_train[:, 1], s=100, c=[mycolors["red"] if yi==1 else mycolors["blue"] for yi in y_train])
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
x_min, x_max = np.min(X_train[:,0])-0.1, np.max(X_train[:,0])+0.1
y_min, y_max = np.min(X_train[:,1])-0.1, np.max(X_train[:,1])+0.1
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
x1 = np.linspace(x_min, x_max, 100)
w0 = logreg.intercept_
w1 = logreg.coef_[0][0]
w2 = logreg.coef_[0][1]
x2 = (-w0 - w1*x1)/w2#TODO
plt.plot(x1, x2, color="gray"); | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Problem 2: The Bag-of-Words Text Model
The remainder of today's exercise will consider the problem of predicting the semantics of text. In particular, later we'll look at predicting whether movie reviews are positive or negative just based on their text.
Before we can utilize text as features in a learning model, we need a concise mathematical way to represent things like words, phrases, sentences, etc. The most common text models are based on the so-called <a href="https://en.wikipedia.org/wiki/Vector_space_model">Vector Space Model</a> (VSM) where individual words in a document are associated with entries of a vector:
$$
\textrm{"The sky is blue"} \quad \Rightarrow \quad
\left[
\begin{array}{c}
0 \\
1 \\
0 \\
0 \\
1
\end{array}
\right]
$$
The first step in creating a VSM is to define a vocabulary, $V$, of words that you will include in your model. This vocabulary can be determined by looking at all (or most) of the words in the training set, or even by including a fixed vocabulary based on the english language. A vector representation of a document like a movie review is then a vector with length $|V|$ where each entry in the vector maps uniquely to a word in the vocabulary. A vector encoding of a document would then be a vector that is nonzero in positions corresponding to words present in the document and zero everywhere else. How you fill in the nonzero entries depends on the model you're using. Two simple conventions are the Bag-of-Words model and the binary model.
In the binary model we simply set an entry of the vector to $1$ if the associate word appears at least once in the document. In the more common Bag-of-Words model we set an entry of the vector equal to the frequency with which the word appears in the document. Let's see if we can come up with a simple implementation of the Bag-of-Words model in Python, and then later we'll see how sklearn can do the heavy lifting for us.
Consider a training set containing three documents, specified as follows
$\texttt{Training Set}:$
$\texttt{d1}: \texttt{new york times}$
$\texttt{d2}: \texttt{new york post}$
$\texttt{d3}: \texttt{los angeles times}$
First we'll define the vocabulary based on the words in the training set. It is $V = \{ \texttt{angeles}, \texttt{los}, \texttt{new}, \texttt{post}, \texttt{times}, \texttt{york}\}$.
We need to define an association between the particular words in the vocabulary and the specific entries in our vectors. Let's define this association in the order that we've listed them above. We can store this mapping as a Python dictionary as follows: | V = {"angeles": 0, "los": 1, "new": 2, "post": 3, "times": 4, "york": 5} | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Let's also store the documents in a list as follows: | D = ["the new york times", "the new york post", "the los angeles times"] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
To be consistent with sklearn conventions, we'll encode the documents as row-vectors stored in a matrix. In this case, each row of the matrix corresponds to a document, and each column corresponds to a term in the vocabulary. For our example this gives us a matrix $M$ of shape $3 \times 6$. The $(d,t)$-entry in $M$ is then the number of times the term $t$ appears in document $d$
Q: Your first task is to write some simple Python code to construct the term-frequency matrix $M$ | M = np.zeros((len(D),len(V)))
for ii, doc in enumerate(D):
    for term in doc.split():
        if term in V:  # only count the term if it is in our vocabulary
            M[ii, V[term]] += 1
print(M) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Hopefully your code returns the matrix
$$M =
\left[
\begin{array}{cccccc}
0 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 1 \\
1 & 1 & 0 & 0 & 1 & 0 \\
\end{array}
\right]$$.
Note that the entry in the (2,0) position is $1$ because the first word (angeles) appears once in the third document.
OK, let's see how we can construct the same term-frequency matrix in sklearn. We will use something called the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html">CountVectorizer</a> to accomplish this. Let's see some code and then we'll explain how it functions.
To avoid common words, such as "the", in our analysis, we will remove any word that appears in a list of common English words. We can do so by typing
stop_words = 'english'
in the CountVectorizer call. | from sklearn.metrics.pairwise import euclidean_distances
from sklearn.feature_extraction.text import CountVectorizer # import CountVectorizer
vectorizer = CountVectorizer(stop_words = 'english') # initialize the vectorizer
X = vectorizer.fit_transform(D) # fit to training data and transform to matrix | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The $\texttt{fit_transform}$ method actually does two things. It fits the model to the training data by building a vocabulary. It then transforms the text in $D$ into matrix form.
If we wish to see the vocabulary you can do it like so | print(vectorizer.vocabulary_) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Note that this is the same vocabulary and indexing that we defined ourselves (just in a different order). Hopefully that means we'll get the same term-frequency matrix. We can print $X$ and check | print(X.todense()) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Yep, they're the same! Notice that we had to convert $X$ to a dense matrix for printing. This is because CountVectorizer actually returns a sparse matrix. This is a very good thing since most vectors in a text model will be extremely sparse, since most documents will only contain a handful of words from the vocabulary.
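As a quick aside (my addition, not part of the original notebook), you can peek at the sparse representation directly to see how few entries are actually stored:

```python
print(type(X))                                       # a scipy.sparse matrix, not a dense array
print(X.nnz, "stored entries out of", X.shape[0] * X.shape[1])
```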
OK, let's see how we can use the CountVectorizer to transform the test documents into their own term-frequency matrix. | #get a sense of how different the vectors are
for f in X:
    print(euclidean_distances(X[0], f))
| notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
OK, now suppose that we have a query document not included in the training set that we want to vectorize. | d4 = ["new york new tribune"] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
We've already fit the CountVectorizer to the training set, so all we need to do is transform the test set documents into a term-frequency vector using the same conventions. Since we've already fit the model, we do the transformation with the $\texttt{transform}$ method: | x4 = vectorizer.transform(d4) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Let's print it and see what it looks like | print(x4.todense()) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Notice that the query document included the word $\texttt{new}$ twice, which corresponds to the entry in the $(0,2)$-position.
Q: What's missing from $x4$ that we might expect to see from the query document?
<br>
Problem 3: Term Frequency - Inverse Document Frequency
The Bag-of-Words model for text classification is very popular, but let's see if we can do better. Currently we're weighting every word in the corpus by its frequency. It turns out that in text classification there are often features that are not particularly useful predictors for the document class, either because they are too common or too uncommon. Stop-words are extremely common, low-information words like "a", "the", "as", etc. Removing these from documents is typically the first thing done in preparing data for document classification.
Q: Can you think of a situation where it might be useful to keep stop words in the corpus?
Other words that tend to be uninformative predictors are words that appear very very rarely. In particular, if they do not appear frequently enough in the training data then it is difficult for a classification algorithm to weight them heavily in the classification process.
In general, the words that tend to be useful predictors are the words that appear frequently, but not too frequently. Consider the following frequency graph for a corpus.
<img src="figs/feat_freq.png",width=400,height=50>
The features in column A appear too frequently to be very useful, and the features in column C appear too rarely. One first-pass method of feature selection in text classification would be to discard the words from columns A and C, and build a classifier with only features from column B.
Another common model for identifying the useful terms in a document is the Term Frequency - Inverse Document Frequency (tf-idf) model. Here we won't throw away any terms, but we'll replace their Bag-of-Words frequency counts with tf-idf scores which we describe below.
The tf-idf score is the product of two statistics, term frequency and inverse document frequency
$$\texttt{tfidf(d,t)} = \texttt{tf(d,t)} \times \texttt{idf(t)}$$
The term frequency $\texttt{tf(d,t)}$ is a measure of the frequency with which term $t$ appears in document $d$. The inverse document frequency $\texttt{idf(t)}$ is a measure of how much information the word provides, that is, whether the term is common or rare across all documents. By multiplying the two quantities together, we obtain a representation of term $t$ in document $d$ that weighs how common the term is in the document with how common the word is in the entire corpus. You can imagine that the words that get the highest associated values are terms that appear many times in a small number of documents.
There are many ways to compute the composite terms $\texttt{tf}$ and $\texttt{idf}$. For simplicity, we'll define $\texttt{tf(d,t)}$ to be the number of times term $t$ appears in document $d$ (i.e., Bag-of-Words). We will define the inverse document frequency as follows:
$$
\texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{# documents with term }t}
= \ln ~ \frac{|D|}{|d: ~ t \in d |}
$$
Note that we could have a potential problem if a term comes up that is not in any of the training documents, resulting in a divide by zero. This might happen if you use a canned vocabulary instead of constructing one from the training documents. To guard against this, many implementations will use add-one smoothing in the denominator (this is what sklearn does).
$$
\texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{1 + # documents with term }t}
= \ln ~ \frac{|D|}{1 + |d: ~ t \in d |}
$$
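As a sanity check before answering, here is a small sketch (my addition, using the toy corpus D and vocabulary V defined earlier) that computes both the plain and the add-one-smoothed idf:

```python
import numpy as np

n_docs = len(D)
for term in sorted(V):
    df = sum(term in doc.split() for doc in D)        # document frequency of the term
    print(term,
          round(np.log(n_docs / df), 3),              # idf without smoothing
          round(np.log(n_docs / (1 + df)), 3))        # idf with add-one smoothing
```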
Q: Compute $\texttt{idf(t)}$ (without smoothing) for each of the terms in the training documents from the previous problem
Q: Compute the tf-idf matrix for the training set | idf = np.array([np.log(3), np.log(3), np.log(3./2), np.log(3), np.log(3./2), np.log(3./2)])
Xtfidf = np.dot(X.todense(), np.diag(idf)) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Hopefully you got something like the following:
$$
X_{tfidf} =
\left[
\begin{array}{cccccc}
0. & 0. & 0.40546511 & 0. & 0.40546511 & 0.40546511 \\
0. & 0. & 0.40546511 & 1.09861229 & 0. & 0.40546511 \\
1.09861229 & 1.09861229 & 0. & 0. & 0.40546511 & 0.
\end{array}
\right]
$$
The final step in any VSM method is the normalization of the vectors. This is done so that very long documents do not completely overpower the small and medium length documents. | row_norms = np.array([np.linalg.norm(row) for row in Xtfidf])
X_tfidf_n = np.dot(np.diag(1./row_norms), Xtfidf)
print(X_tfidf_n) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Let's see what we get when we use sklearn. Sklearn has a vectorizer called TfidfVectorizer which is similar to CountVectorizer, but it computes tf-idf scores. | from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
Y = tfidf.fit_transform(D)
print(Y.todense()) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Note that these are not quite the same, because sklearn's implementation of tf-idf uses the add-one smoothing in the denominator for idf.
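If you want to check this, the following sketch (my addition; it assumes sklearn's default settings `smooth_idf=True` and `norm='l2'`, under which idf is computed as ln((1 + n)/(1 + df)) + 1 and each row is then l2-normalized) should approximately reproduce the TfidfVectorizer output above:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

# Assumption: TfidfVectorizer() defaults to smooth_idf=True and norm='l2'.
counts = CountVectorizer().fit_transform(D).toarray()   # same default vocabulary as TfidfVectorizer()
n_docs = counts.shape[0]
df = (counts > 0).sum(axis=0)                            # document frequency per term
tfidf_manual = counts * (np.log((1 + n_docs) / (1 + df)) + 1)
tfidf_manual = tfidf_manual / np.linalg.norm(tfidf_manual, axis=1, keepdims=True)
print(tfidf_manual)                                      # compare against Y.todense() above
```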
Okay, now let's see if we can use TFIDF analysis on real text documents!
Run the following code to use this analysis on President Obama's inauguration speech from 2009. It will output what TFIDF thinks are the most important words from each paragraph.
Q: Is the analysis able to pick out the most important words correctly? Why does it sometimes pick the wrong words?
Q: You can do the same analysis for his 2012 State of the Union Speech by replacing the first line of code with "obama_SOU_2012.txt". How does the analysis do here?
Q: Find some other piece of text on your own and do the same analysis here by saving it in a .txt file and entering the name of this file in the first line of code. You can find a large collection of speeches at http://www.americanrhetoric.com/newtop100speeches.htm. | #load in text
ObamaText = open("data/obama_SOU_2012.txt").readlines()
#create TFIDF matrix
X = vectorizer.fit_transform(ObamaText)
D_tot = X.shape[0]
Xtfidf = np.zeros(X.shape)
for i, col in enumerate(X.T):  # loop over columns of X (i.e. the vocabulary terms)
    # number of paragraphs the word appears in (no need for smoothing here)
    freq = np.count_nonzero(col.todense())
    # compute the idf
    idf = math.log(D_tot/(freq))
    # calculate the tf-idf
    Xtfidf[:,i:i+1] = X[:,i].todense()*idf
#normalize Xtfidf matrix
row_norms = np.array([np.linalg.norm(row) for row in Xtfidf])
Xtfidf_norm = np.dot(np.diag(1./row_norms), Xtfidf)
#create a list from the dictionary
V_words, V_nums = vectorizer.vocabulary_.keys(), vectorizer.vocabulary_.values()
V_reverse = zip(V_nums, V_words)
V_reverse_dict = dict(V_reverse)
#loop through the paragraphs of the text and print the most important word
for i, row in enumerate(Xtfidf_norm):
    row_str = " "
    row_str = row_str + V_reverse_dict[np.argmax(row)]
    #top_words_ind = np.argsort(row)[-5:]
    #for ii in top_words_ind:
    #    row_str = row_str + V_reverse_dict[ii] + " "
    print("The top word in paragraph " + str(i) + " is " + row_str)
| notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
<br>
Problem 4: Classifying Semantics in Movie Reviews
The data for this problem was taken from the <a href="https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words">Bag of Words Meets Bag of Popcorn</a> Kaggle competition
In this problem you will use the text from movie reviews to predict whether the reviewer felt positively or negatively about the movie using Bag-of-Words and tf-idf. I've partially cleaned the data and stored it in files called $\texttt{labeledTrainData.tsv}$ and $\texttt{labeledTestData.tsv}$ in the data directory. | import csv
def read_and_clean_data(fname, remove_stops=True):
    with open('data/stopwords.txt', 'rt') as f:
        stops = [line.rstrip('\n') for line in f]
    with open(fname, 'rt') as tsvin:
        reader = csv.reader(tsvin, delimiter='\t')
        labels = []; text = []
        for ii, row in enumerate(reader):
            labels.append(int(row[0]))
            words = row[1].lower().split()
            words = [w for w in words if not w in stops] if remove_stops else words
            text.append(" ".join(words))
    return text, labels

text_train, labels_train = read_and_clean_data('data/labeledTrainData.tsv', remove_stops=True)
text_test, labels_test = read_and_clean_data('data/labeledTestData.tsv', remove_stops=True) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The current parameters are set to not remove stop words from the text so that it's a bit easier to explore.
Look at a few of the reviews stored in $\texttt{text_train}$ as well as their associated labels in $\texttt{labels_train}$. Can you figure out which label refers to a positive review and which refers to a negative review? | labels_train[:4] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The first review is labeled $1$ and has the following text: | text_train[0] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The fourth review is labeled $0$ and has the following text: | text_train[3] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Hopefully it's obvious that label 1 corresponds to positive reviews and label 0 to negative reviews!
OK, the first thing we'll do is train a logistic regression classifier using the Bag-of-Words model, and see what kind of accuracy we can get. To get started, we need to vectorize the text into mathematical features that we can use. We'll use CountVectorizer to do the job. (Before starting, I'm going to reload the data and remove the stop words this time) | text_train, labels_train = read_and_clean_data('data/labeledTrainData.tsv', remove_stops=True)
text_test, labels_test = read_and_clean_data('data/labeledTestData.tsv', remove_stops=True)
cvec = CountVectorizer()
X_bw_train = cvec.fit_transform(text_train)
y_train = np.array(labels_train)
X_bw_test = cvec.transform(text_test)
y_test = np.array(labels_test) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Q: How many different words are in the vocabulary?
OK, now we'll train a logistic regression classifier on the training set, and test the accuracy on the test set. To do this we'll need to load some kind of accuracy metric from sklearn. | from sklearn.metrics import accuracy_score
bwLR = LogisticRegression()
bwLR.fit(X_bw_train, y_train)
pred_bwLR = bwLR.predict(X_bw_test)
print("Logistic Regression accuracy with Bag-of-Words: " + str(accuracy_score(y_test, pred_bwLR))) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
OK, so we got an accuracy of around 81% using Bag-of-Words. Now let's do the same tests but this time with tf-idf features. | tvec = TfidfVectorizer()
X_tf_train = tvec.fit_transform(text_train)
X_tf_test = tvec.transform(text_test)
tfLR = LogisticRegression()
tfLR.fit(X_tf_train, y_train)
pred_tfLR = tfLR.predict(X_tf_test)
print("Logistic Regression accuracy with tf-idf: " + str(accuracy_score(y_test, pred_tfLR))) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
WOOHOO! With tf-idf features we got around 85% accuracy, which is a 4% improvement. (If you're scoffing at this, wait until you get some more experience working with real-world data. 4% improvement is pretty awesome).
Q: Which words are the strongest predictors for a positive review and which words are the strongest predictors for negative reviews? I'm not going to give you the answer to this one because it's the same question we'll ask on the next homework assignment. But if you figure this out you'll have a great head start!
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
Notebook Solutions
<br><br><br>
Problem 1: Logistic Regression for 2D Continuous Features
In the video lecture you saw some examples of using logistic regression to do binary classification on text data (SPAM vs HAM) and on 1D continuous data. In this problem we'll look at logistic regression for 2D continuous data. The data we'll use are <a href="https://www.math.umd.edu/~petersd/666/html/iris_with_labels.jpg">sepal</a> measurements from the ubiquitous iris dataset.
<!---
<img style="float:left; width:450px" src="https://upload.wikimedia.org/wikipedia/commons/9/9f/Iris_virginica.jpg",width=300,height=50>
-->
<img style="float:left; width:450px" src="http://www.twofrog.com/images/iris38a.jpg">
<!---
<img style="float:right; width:490px" src="https://upload.wikimedia.org/wikipedia/commons/4/41/Iris_versicolor_3.jpg",width=300,height=50>
-->
<img style="float:right; width:490px" src="http://blazingstargardens.com/wp-content/uploads/2016/02/Iris-versicolor-Blue-Flag-Iris1.jpg">
The two features of our model will be the sepal length and sepal width. Execute the following cell to see a plot of the data. The blue points correspond to the sepal measurements of the Iris Setosa (left) and the red points correspond to the sepal measurements of the Iris Versicolour (right). | import matplotlib.pyplot as plt
%matplotlib inline
from sklearn import datasets
iris = datasets.load_iris()
X_train = iris.data[iris.target != 2, :2] # first two features and
y_train = iris.target[iris.target != 2] # first two labels only
fig = plt.figure(figsize=(8,8))
mycolors = {"blue": "steelblue", "red": "#a76c6e", "green": "#6a9373"}
plt.scatter(X_train[:, 0], X_train[:, 1], s=100, alpha=0.9, c=[mycolors["red"] if yi==1 else mycolors["blue"] for yi in y_train])
plt.xlabel('sepal length', fontsize=16)
plt.ylabel('sepal width', fontsize=16); | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Q: Determine the parameters ${\bf w}$ fit by the model. It might be helpful to consult the documentation for the classifier on the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html">sklearn website</a>. Hint: The classifier stores the coefficients and bias term separately.
A: The bias term is stored in logreg.intercept_ . The remaining coefficients are stored in logreg.coef_ . For this problem we have
$$
w_0 =-0.599, \quad w_1 = 2.217, \quad \textrm{and} \quad w_2 = -3.692
$$
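For reference, one way to read these values off the fitted classifier (assuming the `logreg` object fit earlier):

```python
print(logreg.intercept_)   # bias term w0
print(logreg.coef_)        # weights [w1, w2]
```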
Q: In general, what does the Logistic Regression decision boundary look like for data with two features?
A: The decision boundary for Logistic Regresion for data with two features is a line. To see this, remember that the decision boundary is made up of $(x_1, x_2)$ points such that $\textrm{sigm}({\bf w}^T{\bf x}) = 0.5$. We then have
$$
\frac{1}{1 + \textrm{exp}[-(w_0 + w_1x_1 + w_2x_2)]} = \frac{1}{2} ~~\Rightarrow ~~ w_0 + w_1x_1 + w_2x_2 = 0 ~~\Rightarrow~~ x_2 = -\frac{w_1}{w_2}x_1 - \frac{w_0}{w_2}
$$
So the decision boundary is a line with slope $-w_1/w_2$ and intercept $-w_0/w_2$.
Q: Modify the code below to plot the decision boundary along with the data. | import numpy as np
fig = plt.figure(figsize=(8,8))
plt.scatter(X_train[:, 0], X_train[:, 1], s=100, c=[mycolors["red"] if yi==1 else mycolors["blue"] for yi in y_train])
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
x_min, x_max = np.min(X_train[:,0])-0.1, np.max(X_train[:,0])+0.1
y_min, y_max = np.min(X_train[:,1])-0.1, np.max(X_train[:,1])+0.1
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
x1 = np.linspace(x_min, x_max, 100)
w0 = logreg.intercept_
w1 = logreg.coef_[0][0]
w2 = logreg.coef_[0][1]
x2 = -(w0/w2) - (w1/w2)*x1 #TODO
plt.plot(x1, x2, color="gray"); | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Let's also store the documents in a list as follows: | D = ["new york times", "new york post", "los angeles times"] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
To be consistent with sklearn conventions, we'll encode the documents as row-vectors stored in a matrix. In this case, each row of the matrix corresponds to a document, and each column corresponds to a term in the vocabulary. For our example this gives us a matrix $M$ of shape $3 \times 6$. The $(d,t)$-entry in $M$ is then the number of times the term $t$ appears in document $d$
Q: Your first task is to write some simple Python code to construct the term-frequency matrix $M$ | M = np.zeros((len(D),len(V)))
for ii, doc in enumerate(D):
    for term in doc.split():
        M[ii, V[term]] += 1
print(M) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Hopefully your code returns the matrix
$$M =
\left[
\begin{array}{cccccc}
0 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 1 \\
1 & 1 & 0 & 0 & 1 & 0 \\
\end{array}
\right]$$.
Note that the entry in the (2,0) position is $1$ because the first word (angeles) appears once in the third document.
OK, let's see how we can construct the same term-frequency matrix in sklearn. We will use something called the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html">CountVectorizer</a> to accomplish this. Let's see some code and then we'll explain how it functions. | from sklearn.feature_extraction.text import CountVectorizer # import CountVectorizer
vectorizer = CountVectorizer() # initialize the vectorizer
X = vectorizer.fit_transform(D) # fit to training data and transform to matrix | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Note that this is the same vocabulary and indexing that we defined ourselves. Hopefully that means we'll get the same term-frequency matrix. We can print $X$ and check | print(X.todense()) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Yep, they're the same! Notice that we had to convert $X$ to a dense matrix for printing. This is because CountVectorizer actually returns a sparse matrix. This is a very good thing since most vectors in a text model will be extremely sparse, since most documents will only contain a handful of words from the vocabulary.
OK, now suppose that we have a query document not included in the training set that we want to vectorize. | d4 = ["new york new tribune"] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Notice that the query document included the word $\texttt{new}$ twice, which corresponds to the entry in the $(0,2)$-position.
Q: What's missing from $x4$ that we might expect to see from the query document?
A: The word $\texttt{tribune}$ does not appear in vector $x4$ at all. This is because it did not occur in the training set, which means it is not present in the VSM vocabulary. This should not bother us too much. Most reasonable text data sets will have most of the important words present in the training set and thus in the vocabulary. On the other hand, the throw-away words that are present only in the test set are probably useless anyway, since the learning model is trained based on the text in the training set, and thus won't be able to do anything intelligent with words the model hasn't seen yet.
<br>
Problem 3: Term Frequency - Inverse Document Frequency
The Bag-of-Words model for text classification is very popular, but let's see if we can do better. Currently we're weighting every word in the corpus by its frequency. It turns out that in text classification there are often features that are not particularly useful predictors for the document class, either because they are too common or too uncommon. Stop-words are extremely common, low-information words like "a", "the", "as", etc. Removing these from documents is typically the first thing done in preparing data for document classification.
Q: Can you think of a situation where it might be useful to keep stop words in the corpus?
A: If you plan to use bi-grams or tri-grams as features. Bi-grams are pairs of words that appear side-by-side in a document, e.g. "he went", "went to", "to the", "the store".
Other words that tend to be uninformative predictors are words that appear very very rarely. In particular, if they do not appear frequently enough in the training data then it is difficult for a classification algorithm to weight them heavily in the classification process.
In general, the words that tend to be useful predictors are the words that appear frequently, but not too frequently. Consider the following frequency graph for a corpus.
<img src="figs/feat_freq.png" width=400 height=50>
The features in column A appear too frequently to be very useful, and the features in column C appear too rarely. One first-pass method of feature selection in text classification would be to discard the words from columns A and C, and build a classifier with only features from column B.
Another common model for identifying the useful terms in a document is the Term Frequency - Inverse Document Frequency (tf-idf) model. Here we won't throw away any terms, but we'll replace their Bag-of-Words frequency counts with tf-idf scores which we describe below.
The tf-idf score is the product of two statistics, term frequency and inverse document frequency
$$\texttt{tfidf(d,t)} = \texttt{tf(d,t)} \times \texttt{idf(t)}$$
The term frequency $\texttt{tf(d,t)}$ is a measure of the frequency with which term $t$ appears in document $d$. The inverse document frequency $\texttt{idf(t)}$ is a measure of how much information the word provides, that is, whether the term is common or rare across all documents. By multiplying the two quantities together, we obtain a representation of term $t$ in document $d$ that weighs how common the term is in the document with how common the word is in the entire corpus. You can imagine that the words that get the highest associated values are terms that appear many times in a small number of documents.
There are many ways to compute the composite terms $\texttt{tf}$ and $\texttt{idf}$. For simplicity, we'll define $\texttt{tf(d,t)}$ to be the number of times term $t$ appears in document $d$ (i.e., Bag-of-Words). We will define the inverse document frequency as follows:
$$
\texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{# documents with term }t}
= \ln ~ \frac{|D|}{|d: ~ t \in d |}
$$
Note that we could have a potential problem if a term comes up that is not in any of the training documents, resulting in a divide by zero. This might happen if you use a canned vocabulary instead of constructing one from the training documents. To guard against this, many implementations will use add-one smoothing in the denominator (this is what sklearn does).
$$
\texttt{idf(t)} = \ln ~ \frac{\textrm{total # documents}}{\textrm{1 + # documents with term }t}
= \ln ~ \frac{|D|}{1 + |d: ~ t \in d |}
$$
Q: Compute $\texttt{idf(t)}$ (without smoothing) for each of the terms in the training documents from the previous problem
A:
$
\texttt{idf}(\texttt{angeles}) = \ln ~ \frac{3}{1} = 1.10
$
$
\texttt{idf}(\texttt{los}) = \ln ~ \frac{3}{1} = 1.10
$
$
\texttt{idf}(\texttt{new}) = \ln ~ \frac{3}{2} = 0.41
$
$
\texttt{idf}(\texttt{post}) = \ln ~ \frac{3}{1} = 1.10
$
$
\texttt{idf}(\texttt{times}) = \ln ~ \frac{3}{2} = 0.41
$
$
\texttt{idf}(\texttt{york}) = \ln ~ \frac{3}{2} = 0.41
$
Q: Compute the tf-idf matrix for the training set
A: There are several ways to do this. One way would be to multiply the term-frequency matrix on the right with a diagonal matrix with the idf-values on the main diagonal | idf = np.array([np.log(3), np.log(3), np.log(3./2), np.log(3), np.log(3./2), np.log(3./2)])
Xtfidf = np.dot(X.todense(), np.diag(idf))
print(Xtfidf) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Note that these are not quite the same, because sklearn's implementation of tf-idf uses the add-one smoothing in the denominator for idf.
<br>
Problem 4: Classifying Semantics in Movie Reviews
The data for this problem was taken from the <a href="https://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words">Bag of Words Meets Bag of Popcorn</a> Kaggle competition
In this problem you will use the text from movie reviews to predict whether the reviewer felt positively or negatively about the movie using Bag-of-Words and tf-idf. I've partially cleaned the data and stored it in files called $\texttt{labeledTrainData.tsv}$ and $\texttt{labeledTestData.tsv}$ in the data directory. | import csv
def read_and_clean_data(fname, remove_stops=True):
    with open('data/stopwords.txt', 'r') as f:
        stops = [line.rstrip('\n') for line in f]
    with open(fname, 'r') as tsvin:
        reader = csv.reader(tsvin, delimiter='\t')
        labels = []; text = []
        for ii, row in enumerate(reader):
            labels.append(int(row[0]))
            words = row[1].lower().split()
            words = [w for w in words if not w in stops] if remove_stops else words
            text.append(" ".join(words))
    return text, labels
text_train, labels_train = read_and_clean_data('data/labeledTrainData.tsv', remove_stops=False)
text_test, labels_test = read_and_clean_data('data/labeledTestData.tsv', remove_stops=False) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The current parameters are set to not remove stop words from the text so that it's a bit easier to explore.
Q: Look at a few of the reviews stored in $\texttt{text_train}$ as well as their associated labels in $\texttt{labels_train}$. Can you figure out which label refers to a positive review and which refers to a negative review?
A: | labels_train[:4] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The first review is labeled $1$ and has the following text: | text_train[0] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
The fourth review is labeled $0$ and has the following text: | text_train[3] | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Q: How many different words are in the vocabulary? | X_bw_train.shape | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
A: It looks like around 17,800 distinct words
OK, now we'll train a logistic regression classifier on the training set, and test the accuracy on the test set. To do this we'll need to load some kind of accuracy metric from sklearn. | from sklearn.metrics import accuracy_score
bwLR = LogisticRegression()
bwLR.fit(X_bw_train, y_train)
pred_bwLR = bwLR.predict(X_bw_test)
print("Logistic Regression accuracy with Bag-of-Words: ", accuracy_score(y_test, pred_bwLR)) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
OK, so we got an accuracy of around 81% using Bag-of-Words. Now let's do the same tests but this time with tf-idf features. | tvec = TfidfVectorizer()
X_tf_train = tvec.fit_transform(text_train)
X_tf_test = tvec.transform(text_test)
tfLR = LogisticRegression()
tfLR.fit(X_tf_train, y_train)
pred_tfLR = tfLR.predict(X_tf_test)
print("Logistic Regression accuracy with tf-idf: ", accuracy_score(y_test, pred_tfLR)) | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
WOOHOO! With tf-idf features we got around 85% accuracy, which is a 4% improvement. (If you're scoffing at this, wait until you get some more experience working with real-world data. 4% improvement is pretty awesome).
Q: Which words are the strongest predictors for a positive review and which words are the strongest predictors for negative reviews? I'm not going to give you the answer to this one because it's the same question we'll ask on the next homework assignment. But if you figure this out you'll have a great head start!
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br>
<br><br><br> | from IPython.core.display import HTML
HTML("""
<style>
.MathJax nobr>span.math>span{border-left-width:0 !important};
</style>
""") | notebook_solutions/ML_morning_JTN/02_Logistic_Regression_and_Text_Models.ipynb | jamesfolberth/NGC_STEM_camp_AWS | bsd-3-clause |
Complex Numbers
Q1. Return the angle of a in radians. | a = 1+1j
output = ...
print(output) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
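One possible answer (my addition, not part of the original exercise set; it assumes numpy is imported as `np`): `np.angle` returns the phase in radians by default.

```python
output = np.angle(a)   # pi/4 ≈ 0.7853981633974483 for a = 1+1j
print(output)
```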
Q2. Return the real part and imaginary part of a. | a = np.array([1+2j, 3+4j, 5+6j])
real = ...
imag = ...
print("real part=", real)
print("imaginary part=", imag) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Q3. Replace the real part of a with 9, the imaginary part with [5, 7, 9]. | a = np.array([1+2j, 3+4j, 5+6j])
...
...
print(a) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Q4. Return the complex conjugate of a. | a = 1+2j
output = ...
print(output) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Discrete Fourier Transform
Q5. Compute the one-dimensional DFT of a. | a = np.exp(2j * np.pi * np.arange(8))
output = ...
print(output)
| numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Q6. Compute the one-dimensional inverse DFT of the output in the above question. | print("a=", a)
inversed = ...
print("inversed=", inversed) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Q7. Compute the one-dimensional discrete Fourier Transform for real input a. | a = [0, 1, 0, 0]
output = ...
print(output)
assert output.size==len(a)//2+1 if len(a)%2==0 else (len(a)+1)//2
# cf.
output2 = np.fft.fft(a)
print(output2) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Q8. Compute the one-dimensional inverse DFT of the output in the above question. | inversed = ...
print("inversed=", inversed) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
Q9. Return the DFT sample frequencies of a. | signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=np.float32)
fourier = np.fft.fft(signal)
n = signal.size
freq = ...
print(freq) | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
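A possible answer (my addition, assuming `np.fft.fftfreq` is the intended helper):

```python
freq = np.fft.fftfreq(n)   # DFT sample frequencies for a signal of length n
print(freq)
```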
Window Functions | fig = plt.figure(figsize=(19, 10))
# Hamming window
window = np.hamming(51)
plt.plot(np.bartlett(51), label="Bartlett window")
plt.plot(np.blackman(51), label="Blackman window")
plt.plot(np.hamming(51), label="Hamming window")
plt.plot(np.hanning(51), label="Hanning window")
plt.plot(np.kaiser(51, 14), label="Kaiser window")
plt.xlabel("sample")
plt.ylabel("amplitude")
plt.legend()
plt.grid()
plt.show() | numpy/numpy_exercises_from_kyubyong/Discrete_Fourier_Transform.ipynb | mohanprasath/Course-Work | gpl-3.0 |
OT for domain adaptation
This example introduces a domain adaptation problem in a 2D setting and the 4 OTDA
approaches currently supported in POT. | # Authors: Remi Flamary <[email protected]>
# Stanislas Chambon <[email protected]>
#
# License: MIT License
import matplotlib.pylab as pl
import ot | notebooks/plot_otda_classes.ipynb | aje/POT | mit |
Generate data | n_source_samples = 150
n_target_samples = 150
Xs, ys = ot.datasets.get_data_classif('3gauss', n_source_samples)
Xt, yt = ot.datasets.get_data_classif('3gauss2', n_target_samples) | notebooks/plot_otda_classes.ipynb | aje/POT | mit |
Instantiate the different transport algorithms and fit them | # EMD Transport
ot_emd = ot.da.EMDTransport()
ot_emd.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport
ot_sinkhorn = ot.da.SinkhornTransport(reg_e=1e-1)
ot_sinkhorn.fit(Xs=Xs, Xt=Xt)
# Sinkhorn Transport with Group lasso regularization
ot_lpl1 = ot.da.SinkhornLpl1Transport(reg_e=1e-1, reg_cl=1e0)
ot_lpl1.fit(Xs=Xs, ys=ys, Xt=Xt)
# Sinkhorn Transport with Group lasso regularization l1l2
ot_l1l2 = ot.da.SinkhornL1l2Transport(reg_e=1e-1, reg_cl=2e0, max_iter=20,
verbose=True)
ot_l1l2.fit(Xs=Xs, ys=ys, Xt=Xt)
# transport source samples onto target samples
transp_Xs_emd = ot_emd.transform(Xs=Xs)
transp_Xs_sinkhorn = ot_sinkhorn.transform(Xs=Xs)
transp_Xs_lpl1 = ot_lpl1.transform(Xs=Xs)
transp_Xs_l1l2 = ot_l1l2.transform(Xs=Xs) | notebooks/plot_otda_classes.ipynb | aje/POT | mit |
Fig 1 : plots source and target samples | pl.figure(1, figsize=(10, 5))
pl.subplot(1, 2, 1)
pl.scatter(Xs[:, 0], Xs[:, 1], c=ys, marker='+', label='Source samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Source samples')
pl.subplot(1, 2, 2)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o', label='Target samples')
pl.xticks([])
pl.yticks([])
pl.legend(loc=0)
pl.title('Target samples')
pl.tight_layout() | notebooks/plot_otda_classes.ipynb | aje/POT | mit |
Fig 2 : plot optimal couplings and transported samples | param_img = {'interpolation': 'nearest', 'cmap': 'spectral'}
pl.figure(2, figsize=(15, 8))
pl.subplot(2, 4, 1)
pl.imshow(ot_emd.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nEMDTransport')
pl.subplot(2, 4, 2)
pl.imshow(ot_sinkhorn.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornTransport')
pl.subplot(2, 4, 3)
pl.imshow(ot_lpl1.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornLpl1Transport')
pl.subplot(2, 4, 4)
pl.imshow(ot_l1l2.coupling_, **param_img)
pl.xticks([])
pl.yticks([])
pl.title('Optimal coupling\nSinkhornL1l2Transport')
pl.subplot(2, 4, 5)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_emd[:, 0], transp_Xs_emd[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nEmdTransport')
pl.legend(loc="lower left")
pl.subplot(2, 4, 6)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_sinkhorn[:, 0], transp_Xs_sinkhorn[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nSinkhornTransport')
pl.subplot(2, 4, 7)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_lpl1[:, 0], transp_Xs_lpl1[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nSinkhornLpl1Transport')
pl.subplot(2, 4, 8)
pl.scatter(Xt[:, 0], Xt[:, 1], c=yt, marker='o',
label='Target samples', alpha=0.3)
pl.scatter(transp_Xs_l1l2[:, 0], transp_Xs_l1l2[:, 1], c=ys,
marker='+', label='Transp samples', s=30)
pl.xticks([])
pl.yticks([])
pl.title('Transported samples\nSinkhornL1l2Transport')
pl.tight_layout()
pl.show() | notebooks/plot_otda_classes.ipynb | aje/POT | mit |
Create fake data | rf = 0.04
np.random.seed(1)
mus = np.random.normal(loc=0.05,scale=0.02,size=5) + rf
sigmas = (mus - rf)/0.3 + np.random.normal(loc=0.,scale=0.01,size=5)
num_years = 10
num_months_per_year = 12
num_days_per_month = 21
num_days_per_year = num_months_per_year*num_days_per_month
rdf = pd.DataFrame(
index = pd.date_range(
start="2008-01-02",
periods=num_years*num_months_per_year*num_days_per_month,
freq="B"
),
columns=['foo','bar','baz','fake1','fake2']
)
for i, mu in enumerate(mus):
    sigma = sigmas[i]
    rdf.iloc[:,i] = np.random.normal(
        loc=mu/num_days_per_year,
        scale=sigma/np.sqrt(num_days_per_year),
        size=rdf.shape[0]
    )
pdf = np.cumprod(1+rdf)*100
pdf.iloc[0,:] = 100
pdf.plot()
strategy_names = np.array(
[
'Equal Weight',
'Inv Vol'
]
)
runMonthlyAlgo = bt.algos.RunMonthly(
run_on_first_date=True,
run_on_end_of_period=True
)
selectAllAlgo = bt.algos.SelectAll()
rebalanceAlgo = bt.algos.Rebalance()
strats = []
tests = []
for i, s in enumerate(strategy_names):
    if s == "Equal Weight":
        wAlgo = bt.algos.WeighEqually()
    elif s == "Inv Vol":
        wAlgo = bt.algos.WeighInvVol()
    strat = bt.Strategy(
        s,
        [
            runMonthlyAlgo,
            selectAllAlgo,
            wAlgo,
            rebalanceAlgo
        ]
    )
    strats.append(strat)
    t = bt.Backtest(
        strat,
        pdf,
        integer_positions=False,
        progress_bar=False
    )
    tests.append(t)
combined_strategy = bt.Strategy(
'Combined',
algos = [
runMonthlyAlgo,
selectAllAlgo,
bt.algos.WeighEqually(),
rebalanceAlgo
],
children = [x.strategy for x in tests]
)
combined_test = bt.Backtest(
combined_strategy,
pdf,
integer_positions = False,
progress_bar = False
)
res = bt.run(combined_test)
res.prices.plot()
res.get_security_weights().plot() | examples/Strategy_Combination.ipynb | pmorissette/bt | mit |
In order to get the weights of each strategy, you can run each strategy, get the prices for each strategy, combine them into one price dataframe, run the combined strategy on the new data set. | strategy_names = np.array(
[
'Equal Weight',
'Inv Vol'
]
)
runMonthlyAlgo = bt.algos.RunMonthly(
run_on_first_date=True,
run_on_end_of_period=True
)
selectAllAlgo = bt.algos.SelectAll()
rebalanceAlgo = bt.algos.Rebalance()
strats = []
tests = []
results = []
for i, s in enumerate(strategy_names):
    if s == "Equal Weight":
        wAlgo = bt.algos.WeighEqually()
    elif s == "Inv Vol":
        wAlgo = bt.algos.WeighInvVol()
    strat = bt.Strategy(
        s,
        [
            runMonthlyAlgo,
            selectAllAlgo,
            wAlgo,
            rebalanceAlgo
        ]
    )
    strats.append(strat)
    t = bt.Backtest(
        strat,
        pdf,
        integer_positions=False,
        progress_bar=False
    )
    tests.append(t)
    res = bt.run(t)
    results.append(res)
fig, ax = plt.subplots(nrows=1,ncols=1)
for i, r in enumerate(results):
    r.plot(ax=ax)
merged_prices_df = bt.merge(results[0].prices,results[1].prices)
combined_strategy = bt.Strategy(
'Combined',
algos = [
runMonthlyAlgo,
selectAllAlgo,
bt.algos.WeighEqually(),
rebalanceAlgo
]
)
combined_test = bt.Backtest(
combined_strategy,
merged_prices_df,
integer_positions = False,
progress_bar = False
)
res = bt.run(combined_test)
res.plot()
res.get_security_weights().plot() | examples/Strategy_Combination.ipynb | pmorissette/bt | mit |
To access a value, you index into it similarly to a list using square brackets.
value_of_key1 = my_dict['key1'] | raspberry_season = fruit_season['raspberry']
print(raspberry_season) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Trying to access a key not in the dictionary throws an error | print(fruit_season['mangos']) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
To add an item to the dictionary, assign a value to the new key:
dict['new_key'] = value | fruit_season['strawberry'] = 'May'
print(fruit_season) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
To delete a key, use the del keyword
del dict['key to delete'] | del fruit_season['strawberry']
print(fruit_season) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Rules on keys
Keys in a dictionary must be unique. If you try to make a duplicate key, the data will be overwritten.
Keys must be hashable. What this means is they must come from immutable values and be comparable. You can use strings, numbers, tuples, frozensets, and (most) objects. You cannot use lists, sets, or dictionaries as keys. | duplicate_fruit_season = {
'raspberry': 'May',
'raspberry': 'June',
}
print(duplicate_fruit_season)
mutable_key = {
['watermelon', 'cantaloupe', 'honeydew']: 'July'
}
# The solution is to use a tuple instead
immutable_key = {
('watermelon', 'cantelope', 'honeydew'): 'July'
} | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Create a dictionary called vegetable_season with Eggplant-> July and Onion -> May
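One possible solution (a sketch added here, not part of the original lesson):

```python
vegetable_season = {
    'Eggplant': 'July',
    'Onion': 'May',
}
print(vegetable_season)
```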
Dictionary Operators
The in operator returns a boolean for whether the key is in the dictionary or not.
key in dictionary | print('raspberry' in fruit_season)
print('mangos' in fruit_season) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
You can use this in an if statement | if 'pineapple' in fruit_season:
    print('Lets eat tropical fruit')
else:
    print("Temperate fruit it is.") | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Check if 'broccoli' is in vegetable_season. If so, print 'Yum, little trees!'
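One possible solution (a sketch; it assumes the vegetable_season dictionary from the earlier TRY IT exists):
if 'broccoli' in vegetable_season:
    print('Yum, little trees!')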
Dictionaries and Loops
You can use a for ... in loop to loop through the keys of a dictionary
for key in dictionary:
    print(key) | for fruit in fruit_season:
print("{0} is best in {1} (at least in Virginia)".format(fruit.title(), fruit_season[fruit])) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Dictionary Methods
You can use the keys, values, or items methods to get the keys, values, or key-value tuples respectively. In Python 3 these return view objects, so wrap them in list() if you need an actual list.
You can then use these for sorting or for looping | print(list(fruit_season.keys()))
print(list(fruit_season.values()))
print(list(fruit_season.items()))
for key, value in list(fruit_season.items()):
print("In {0} eat a {1}".format(value, key))
print(sorted(fruit_season.keys())) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Loop through the sorted keys of the vegetable_season dictionary. For each key, print the month it is in season
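One possible solution (a sketch; again assuming the vegetable_season dictionary from the earlier TRY IT):
for vegetable in sorted(vegetable_season.keys()):
    print("{0} is best in {1}".format(vegetable.title(), vegetable_season[vegetable]))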
More complex dictionaries
Dictionary keys and values can be almost anything. The keys must be hashable, which means they cannot change. That means that lists and dictionaries cannot be keys (but strings, tuples, and integers can).
Values can be just about anything, though. | my_complicated_dictionary = {
(1, 2, 3): 6,
'weevil': {
'e': 2,
'i': 1,
'l': 1,
'v': 1,
'w': 1,
},
9: [3, 3]
}
print(my_complicated_dictionary) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
Let's use this to create a more realistic fruit season dictionary | true_fruit_season = {
'raspberry': ['May', 'June'],
'apple': ['September', 'October', 'November', 'December'],
'peach': ['July', 'August'],
'grape': ['August', 'September', 'October']
}
print(true_fruit_season)
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
for month in months:
print(('It is {0}'.format(month)))
for fruit, season in list(true_fruit_season.items()):
if month in season:
print(("\tEat {0}".format(fruit))) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
TRY IT
Add a key to true_fruit_season for 'watermelon'; the season is July, August, and September
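One possible solution (a sketch):
true_fruit_season['watermelon'] = ['July', 'August', 'September']
print(true_fruit_season['watermelon'])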
Project: Acrostic
Create an acrostic poem generator.
You will create a function that takes a name and generates an acrostic poem
Create a dictionary that has each of the capital letters as a key and an adjective that starts with that letter as the value, and store it in a variable named adjectives. (Reference: http://www.enchantedlearning.com/wordlist/adjectives.shtml)
Create a function called acrostic that takes one parameter, name.
In the acrostic function, capitalize the name (use the upper method)
For each letter in the name
Get the adjective corresponding to that letter and store it in a variable called current_adj
Print out Letter-current_adj
Challenge: instead of just one adjective, have each letter's value be a list of adjectives. Use the random module to select a random adjective instead of always selecting the same one. (A minimal sketch of the basic version follows below.)
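A minimal sketch of the basic version (the adjectives dictionary below only covers a few letters with placeholder words, not the full A-Z mapping the project asks for):
adjectives = {
    'A': 'Adorable',
    'B': 'Brave',
    'C': 'Clever',
    'D': 'Daring',
    'E': 'Eager',
}

def acrostic(name):
    name = name.upper()
    for letter in name:
        current_adj = adjectives[letter]
        print("{0}-{1}".format(letter, current_adj))

acrostic('Abe')  # prints A-Adorable, B-Brave, E-Eager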
Bonus Material
Auto-generating the dictionary for the acrostic: | # If you have a list of adjectives
my_dict = {}
# Imagine this is the full alphabet
for i in ['A', 'B', 'C']:
my_dict[i] = []
for i in ['Adorable', 'Acceptable', 'Bad', 'Cute', 'Basic', 'Dumb']:
first_char = i[0]
if first_char in my_dict:
my_dict[first_char].append(i)
print(my_dict)
# Generating from a file
my_dict = {}
for i in ['A', 'B', 'C']:
my_dict[i] = []
# adjectives.txt has one adjective per line
with open('adjectives.txt') as fh:
for line in fh:
word = line.rstrip().title()
first_char = word[0]
if first_char in my_dict:
my_dict[first_char].append(word)
print(my_dict['A']) | Lesson08_Dictionaries/Dictionary.ipynb | WomensCodingCircle/CodingCirclePython | mit |
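With each letter mapping to a list of adjectives (as the bonus code above builds), the challenge version could pick a random word like this (a sketch; it assumes the my_dict built above and skips letters with no collected adjectives):
import random

def random_acrostic(name):
    for letter in name.upper():
        if letter in my_dict and my_dict[letter]:
            print("{0}-{1}".format(letter, random.choice(my_dict[letter])))

random_acrostic('Cab')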
<a id='exponential'></a>
Exponential-Series Representation of Time-Dependent Quantum Objects
The eseries object in QuTiP is a representation of an exponential-series expansion of time-dependent quantum objects (a concept borrowed from the quantum optics toolbox).
An exponential series is parameterized by its amplitude coefficients $c_i$ and rates $r_i$, so that the series takes the form
$E(t) = \sum_i c_i e^{r_i t}$. The coefficients are typically quantum objects (i.e. states, operators, etc.), so that the value of the eseries also is a quantum object, and the rates can be either real or complex numbers (describing decay rates and oscillation frequencies, respectively). Note that all amplitude coefficients in an exponential series must be of the same dimensions and composition.
In QuTiP, an exponential series object is constructed by creating an instance of the class eseries: | es1 = eseries(sigmax(), 1j) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
where the first argument is the amplitude coefficient (here, the sigma-X operator), and the second argument is the rate. The eseries in this example represents the time-dependent operator $\sigma_x e^{i t}$. To add more terms to an eseries object we simply add objects using the + operator: | omega = 1.0
es2 = (eseries(0.5 * sigmax(), 1j * omega) + eseries(0.5 * sigmax(), -1j * omega)) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
The eseries in this example represents the operator $0.5 \sigma_x e^{i\omega t} + 0.5 \sigma_x e^{-i\omega t}$, which is the exponential series representation of $\sigma_x \cos(\omega t)$. Alternatively, we can also specify a list of amplitudes and rates when the eseries is created: | es2 = eseries([0.5 * sigmax(), 0.5 * sigmax()], [1j * omega, -1j * omega]) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
We can inspect the structure of an eseries object by printing it to the standard output console: | es2 | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
and we can evaluate it at time $t$ by using the esval function or the value method: | esval(es2, 0.0) # equivalent to es2.value(0.0)
es2.value(0) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
or for a list of times [0.0, 1.0 * pi, 2.0 * pi]: | times = [0.0, 1.0 * np.pi, 2.0 * np.pi]
esval(es2, times)
es2.value(times) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
To calculate the expectation value of a time-dependent operator represented by an eseries, we use the expect function. For example, consider the operator $\sigma_z \cos(t) + \sigma_x\sin(t)$ (constructed in the code below), and say we would like to know the expectation value of this operator for a spin in its excited state (rho = fock_dm(2,1) produces this state): | es3 = (eseries([0.5*sigmaz(), 0.5*sigmaz()], [1j, -1j]) +
eseries([-0.5j*sigmax(), 0.5j*sigmax()], [1j, -1j]))
rho = fock_dm(2, 1)
es3_expect = expect(rho, es3)
es3_expect | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Note that the expectation value of the eseries object, expect(rho, es3), is itself an eseries, but with amplitude coefficients that are c-numbers instead of quantum operators. To evaluate the c-number eseries at given times we use es3_expect.value(times) or, equivalently, esval(es3_expect, times). | es3_expect.value([0.0, pi/2]) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
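As a quick cross-check (an added note, not part of the original guide): for rho = fock_dm(2, 1) we have $\langle\sigma_z\rangle = -1$ and $\langle\sigma_x\rangle = 0$, so the expectation value reduces to $-\cos(t)$, i.e. $-1$ at $t = 0$ and $0$ at $t = \pi/2$, which matches the evaluation above:
np.allclose(es3_expect.value([0.0, np.pi/2]), [-np.cos(0.0), -np.cos(np.pi/2)])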
<a id='applications'></a>
Applications of Exponential Series
The exponential series formalism can be useful for the time-evolution of quantum systems. One approach to calculating the time evolution of a quantum system is to diagonalize its Hamiltonian (or Liouvillian, for dissipative systems) and to express the propagator (e.g., $\exp(-iHt) \rho \exp(iHt)$) as an exponential series.
The QuTiP functions ode2es and essolve use this method to evolve quantum systems in time. The exponential-series approach is particularly suitable when the same system is to be evolved for many different initial states, since the diagonalization only needs to be performed once (as opposed to, e.g., the ODE solver, which would need to be run independently for each initial state).
As an example, consider a spin-1/2 with a Hamiltonian pointing in the $\sigma_z$ direction that is subject to noise causing relaxation. For a spin that is originally in the up state, we can create an eseries object describing its dynamics by using the ode2es function: | psi0 = basis(2,1)
H = sigmaz()
L = liouvillian(H, [sqrt(1.0) * destroy(2)])
es = ode2es(L, psi0) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
The ode2es function diagonalizes the Liouvillian $L$ and creates an exponential series with the correct eigenfrequencies and amplitudes for the initial state
$\psi_0$ (psi0).
We can examine the resulting eseries object by printing a text representation: | es | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
or by evaluating it at arbitrary points in time (here at 0.0 and 1.0): | es.value([0.0, 1.0]) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
and the expectation value of the exponential series can be calculated using the expect function: | es_expect = expect(sigmaz(), es) | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
The result es_expect is now an exponential series with c-numbers as amplitudes, which can easily be evaluated at arbitrary times: | times = linspace(0.0, 10.0, 100)
sz_expect = es_expect.value(times)
plot(times, sz_expect, lw=2)
xlabel("Time", fontsize=14)
ylabel("Expectation value of sigma-z", fontsize=14)
show()
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/guide.css", "r").read()
return HTML(styles)
css_styling() | docs/guide/Eseries.ipynb | qutip/qutip-notebooks | lgpl-3.0 |
Variables, numbers and basic math operations
Take the following statement, x = 33. In this statement, x is a variable with a value of 33 (integer literal). As its name suggests, variables can be reassigned when executing a program.
Simple data types in Python include integers, floating point numbers, strings and Boolean (True/False) values. In the following examples, the variables Ag and Au are assigned integer values. The variables are then reassigned to floating point numbers and some simple math operations are demonstrated.
Au and Ag variables assigned to integer values
Integers are whole numbers, e.g., 1, 0, -20, etc. | Ag = 107
Au = 197
type(Ag)
type(Au) | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
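Booleans were listed among the simple data types above but not shown; here is a quick illustration (an added example using the variables already defined):
Au > Ag # evaluates to the Boolean value True
type(Au > Ag)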
Au and Ag variables reassigned to floating point values
Floats use a decimal point or exponential notation. Note that values such as 20.0, 1.0, 0.0 etc. are floats, not integers. Also, floating-point values may not be a true representation of the number since computers use a binary (base-2) number system. This can lead to floating-point errors when working with floating-point values. | Ag = 106.9
Au = 197.0
type(Ag)
type(Au) | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
Math operations
In Python 3, math operations behave as you would expect. In Python 2, division is the same as floor division when working with integers! | Au + Ag # addition (Note: You can add comments to your code by using the # symbol)
Au - Ag # subtraction
Au * 5 # multiplication
Ag ** 2 # exponentiation - mass of silver squared in this case
Au / Ag # division
Au // Ag # floor division | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
Type conversion
We can convert a float to an integer or an integer to a float. Converting string representations of numbers to actual numbers (integers and floats) is also common in Python programming. | integer_Ag = int(Ag)
integer_Ag
type(integer_Ag) | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
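Converting a string representation of a number (an added example; the string value here is made up) uses int() or float() in the same way:
mz_string = "106.9"
mz_value = float(mz_string)
type(mz_value)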
The modulo operator (%)
The modulo (%) operator gives the remainder from division. This is commonly used to check whether a number is odd or even. In mass spectrometry, we could use this to test whether an ion has an odd number of nitrogen atoms. For example,
mz = 114
if mz % 2 == 0:
    print("Ion has an odd number of nitrogens") | modulo = integer_Ag % 2
if modulo == 0:
print("Ion is even") | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
Strings and indexing
In the following example, we have a file name represented as a string. From this string, we can use indexing to select certain characters or substrings from the file name string.
Strings
Strings are simply sequences of characters (letters, numbers, symbols, etc.). Strings are indicated using either single quotes ('This is a string') or double quotes ("This is another string"). Multiline strings can be made using triple quotes (i.e. """Multiline string"""). | MS2_spectrum = "Liver_MS2_406.raw"
MS2_spectrum | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
Indexing
Single characters are indexed using square brackets after the variable name. In Python, the first character is at index 0. Characters may also be indexed from the end of the string (starting at -1). | MS2_spectrum[0]
MS2_spectrum[-1] | Python_tutorial.ipynb | Michaelt293/ANZSMS-Programming-workshop | cc0-1.0 |
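Substrings (slices) can also be selected by giving a start:stop range inside the square brackets (an added example using the same file name; the character at the stop index is excluded):
MS2_spectrum[0:5] # 'Liver'
MS2_spectrum[-3:] # 'raw'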