markdown | code | path | repo_name | license
---|---|---|---|---|
Computing sums
In statistics one often needs the sample mean: given a sample of values $x_k$, $k=1..N$, compute
$$\bar x=\frac1N\sum_{k=1}^N x_k.$$
From a mathematical point of view it does not matter in which order this sum is evaluated, since the result of the addition is always the same.
In floating-point arithmetic, however, the answer depends on the order of operations, if only because floating-point addition is not associative.
But does the accuracy of the computation depend on the order of operations?
Let us check.
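A one-line illustration of this non-associativity:

```python
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: the two orders of addition give slightly different results
```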
Let us construct a sample such that the sum of all its elements equals $1$, while the magnitudes of the elements vary over a wide range.
To do so, we split unity into $K$ parts and split the $k$-th part into $\mathrm{base}^k$ equal values.
Finally, we shuffle the resulting elements. | base=10 # parameter, can take any integer value > 1
def exact_sum(K):
"""Exact value of the sum of all elements."""
return 1.
def samples(K):
"""Elements of the sample."""
# create K parts of base^k identical values each
parts=[np.full((base**k,), float(base)**(-k)/K) for k in range(0, K)]
# build the sample by concatenating the parts
samples=np.concatenate(parts)
# shuffle the elements of the sample and return them
return np.random.permutation(samples)
def direct_sum(x):
"""Последовательная сумма всех элементов вектора x"""
s=0.
for e in x:
s+=e
return s
def number_of_samples(K):
"""Число элементов в выборке"""
return np.sum([base**k for k in range(0, K)])
def exact_mean(K):
"""Значение среднего арифметического по выборке с близкой к машинной точностью."""
return 1./number_of_samples(K)
def exact_variance(K):
"""Значение оценки дисперсии с близкой к машинной точностью."""
# разные значения элементов выборки
values=np.asarray([float(base)**(-k)/K for k in range(0, K)], dtype=np.double)
# сколько раз значение встречается в выборке
count=np.asarray([base**k for k in range(0, K)])
return np.sum(count*(values-exact_mean(K))**2)/number_of_samples(K) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Let us create a sample whose values span 6 orders of magnitude and sum its elements. | K=7 # number of terms
x=samples(K) # store the sample in an array
print("Number of elements:", len(x))
print("Smallest and largest values:", np.min(x), np.max(x))
exact_sum_for_x=exact_sum(K) # value of the sum accurate to nearly machine precision
direct_sum_for_x=direct_sum(x) # sum of all elements in their stored order
def relative_error(x0, x):
"""Погрешность x при точном значении x0"""
return np.abs(x0-x)/np.abs(x)
print("Погрешность прямого суммирования:", relative_error(exact_sum_for_x, direct_sum_for_x)) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Now let us sum the elements in increasing order. | sorted_x=x[np.argsort(x)]
sorted_sum_for_x=direct_sum(sorted_x)
print("Погрешность суммирования по возрастанию:", relative_error(exact_sum_for_x, sorted_sum_for_x)) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Let us try summing in decreasing order. | sorted_x=x[np.argsort(x)[::-1]]
sorted_sum_for_x=direct_sum(sorted_x)
print("Погрешность суммирования по убыванию:", relative_error(exact_sum_for_x, sorted_sum_for_x)) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Thus the error of the result depends on the order of summation.
How can this effect be explained?
In practice it is preferable to sum not naively but with compensated summation (see the Kahan summation algorithm). | def Kahan_sum(x):
s=0.0 # partial sum
c=0.0 # running compensation (accumulated rounding error)
for i in x:
y=i-c # initially y equals the next element of the sequence
t=s+y # the sum s may be large, so the low-order bits of y are lost
c=(t-s)-y # (t-s) recovers the part of y that was actually added; subtracting y gives minus the lost low-order bits
s=t # new value of the high-order bits of the sum
return s
Kahan_sum_for_x=Kahan_sum(x) # sum of all elements in their stored order
print("Error of Kahan summation:", relative_error(exact_sum_for_x, Kahan_sum_for_x)) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Exercises
Explain the difference in the errors for the different summation orders.
Why is the Kahan algorithm considerably more accurate than sequential summation?
Will we obtain the same errors when summing a sequence whose terms have different signs? Check this on the sequence
$$x_k=\sin k.$$
What happens to the error if the elements of a sample with mixed signs are ordered by increasing value? By increasing absolute value? Check experimentally.
Hint
The sum of the first $N$ elements of the sequence $x_k=\sin k$ can be found in closed form:
$$\sum_{k=1}^N\sin k=\frac{1}{2}\bigg(\sin N-\cot\frac{1}{2}\cos N+\cot\frac{1}{2}\bigg).$$
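A quick numerical check of this identity (a sketch):

```python
import numpy as np

N = 1000
cot_half = 1.0 / np.tan(0.5)
direct = np.sum(np.sin(np.arange(1, N + 1)))                  # straightforward summation
closed = 0.5 * (np.sin(N) - cot_half * np.cos(N) + cot_half)  # closed form above
print(direct, closed)  # the two values agree to many digits
```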
Computing the variance
Besides the estimate of the expected value, one often needs an estimate of the standard deviation or of its square, the variance.
The variance $D[X]$ of a random variable $X$ is defined through the expectation $E[X]$ as follows:
$$D[X]=E[(X-E[X])^2].$$
To estimate the variance we can use the estimate of the expectation by the sample mean,
$$E[X]\approx\frac1N\sum_{n=1}^N x_n,$$
which suggests the following estimate of the variance (the first formula):
$$D[X]\approx\frac1N\sum_{n=1}^N\left(x_n-\frac1N\sum_{m=1}^Nx_m\right)^2.$$
This estimate is biased, i.e. its expectation does not coincide with the true value of the variance, so in practice one should use the unbiased estimate
$$D[X]\approx\frac1{N-1}\sum_{n=1}^N\left(x_n-\frac1N\sum_{m=1}^Nx_m\right)^2;$$
in this work, however, we will content ourselves with the biased estimate.
Unfortunately, this formula does not allow updating the variance as values are added to the sample, because it requires two passes over the data: first the mean is computed, then the variance.
Probability textbooks, however, also give another, equivalent formula for the variance; let us derive it using the properties of the expectation:
$$D[X]=E[(X-E[X])^2]=E[X^2-2E[X]X+E[X]^2]=E[X^2]-2E[X]E[X]+E[E[X]^2]=E[X^2]-E[X]^2.$$
Replacing the expectation by the sample mean again, we obtain a new estimate of the variance (the second formula):
$$D[X]\approx \frac1N\sum_{n=1}^N x_n^2-\left(\frac1N\sum_{n=1}^Nx_n\right)^2.$$
The second formula is computationally more attractive, since both sums can be accumulated simultaneously, so the estimates of the mean and the variance can be updated as values arrive one by one.
Indeed, introduce notation for the estimates of the mean and the variance over the first $n$ elements of the sample:
$$E_n=\frac1n\sum_{k=1}^n x_k,\quad D_n=\frac1n\sum_{k=1}^n x_k^2-E_n^2.$$
From these definitions the recurrence relations follow easily:
$$E_{n}=\frac{x_{n}+(n-1)E_{n-1}}{n},\quad D_{n}=\frac{x_{n}^2+(n-1)\left(D_{n-1}+E_{n-1}^2\right)}{n}-E_{n}^2.$$
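A quick check of these recurrences against NumPy (a sketch; the sample used here is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)

E, D = x[0], 0.0  # estimates over the first element
for n in range(2, len(x) + 1):
    E_prev, D_prev = E, D
    E = (x[n - 1] + (n - 1) * E_prev) / n
    D = (x[n - 1]**2 + (n - 1) * (D_prev + E_prev**2)) / n - E**2

print(E - np.mean(x), D - np.var(x))  # both differences should be tiny
```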
Although these formulas are simple, the error of computations by the second formula can be considerably larger than by the first. Let us verify this.
Consider a sample whose mean is orders of magnitude larger than its standard deviation. Let exactly half of the values exceed the mean by $\delta$ and half fall below it by $\delta$.
The estimates of the variance and of the mean are then easy to compute explicitly. | # sample parameters
mean=1e6 # the mean
delta=1e-5 # deviation from the mean
def samples(N_over_two):
"""Generates a sample of 2*N_over_two values with the given mean and standard
deviation."""
x=np.full((2*N_over_two,), mean, dtype=np.double)
x[:N_over_two]+=delta
x[N_over_two:]-=delta
return np.random.permutation(x)
def exact_mean():
"""Value of the sample mean, accurate to nearly machine precision."""
return mean
def exact_variance():
"""Value of the variance estimate, accurate to nearly machine precision."""
return delta**2
x=samples(1000000)
print("Размер выборки:", len(x))
print("Среднее значение:", exact_mean())
print("Оценка дисперсии:", exact_variance())
print("Ошибка среднего для встроенной функции:",relative_error(exact_mean(),np.mean(x)))
print("Ошибка дисперсии для встроенной функции:",relative_error(exact_variance(),np.var(x)))
def direct_mean(x):
"""Mean computed by sequential summation."""
return direct_sum(x)/len(x)
print("Mean error of sequential summation:",relative_error(exact_mean(),direct_mean(x)))
def direct_second_var(x):
"""Second variance estimate computed by sequential summation."""
return direct_mean(x**2)-direct_mean(x)**2
def online_second_var(x):
"""Second variance estimate computed in a single pass over the sample"""
m=x[0] # running mean
m2=x[0]**2 # running mean of squares
for n in range(1,len(x)):
m=(m*n+x[n])/(n+1) # update the running mean with the (n+1)-th element
m2=(m2*n+x[n]**2)/(n+1) # update the running mean of squares
return m2-m**2
print("Error of the second variance estimate with sequential summation:",relative_error(exact_variance(),direct_second_var(x)))
print("Error of the second variance estimate with single-pass summation:",relative_error(exact_variance(),online_second_var(x)))
def direct_first_var(x):
"""First variance estimate computed by sequential summation."""
return direct_mean((x-direct_mean(x))**2)
print("Error of the first variance estimate with sequential summation:",relative_error(exact_variance(),direct_first_var(x)))
| practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
As we can see, summation by the first formula gives the most accurate result, summation by the second formula is less accurate, and the single-pass formula is the least accurate.
Exercises
Explain why the variance formulas have different errors, even though applying them requires the same operations, only performed in a different order. Estimate the errors of both formulas.
Propose a single-pass formula for estimating the mean and the variance based on the first variance formula. Use compensated summation to increase the accuracy. Try to improve the accuracy of the computation relative to the second formula by at least two orders of magnitude.
Summing the series for the exponential
The exponential function has one of the simplest Taylor series expansions:
$$e^x = \sum_{k=0}^\infty \frac{x^k}{k!}.$$
A natural idea for computing the exponential function is therefore to use this series.
In this section we examine how effective this approach is.
Since in practice we cannot sum infinitely many terms, we approximate the series by its partial sum:
$$e^x \approx \sum_{k=0}^N \frac{x^k}{k!}.$$
Since the partial sum is a polynomial, for practical evaluation it is convenient to use [Horner's scheme](ru.wikipedia.org/wiki/Схема_Горнера):
$$e^x \approx 1+x\bigg(1+\frac{x}{2}\bigg(1+\frac{x}{3}\bigg(1+\frac{x}{4}\bigg(\ldots+\frac{x}{N}\bigg(1\bigg)\ldots\bigg)\bigg)\bigg)\bigg).$$
Let us run an experiment to assess the accuracy of this expansion.
We will compare against the library function numpy.exp, which itself does not give a perfectly exact answer.
Let us estimate the error of the library function, assuming it is computed with the maximum possible accuracy.
The condition number of the exponential function with respect to relative error is $\kappa_{\exp}(x)=|x|$,
so, taking into account the rounding of the argument to a floating-point number, we expect the relative error of the result to be at least $|x|\epsilon/2+\epsilon$. | def exp_taylor(x, N=None):
"""N-ая частичная сумма ряда Тейлора для экспоненты."""
acc = 1 # k-ая частичная сумму. Начинаем с k=0.
xk = 1 # Степени x^k.
inv_fact = 1 # 1/k!.
for k in range(1, N+1):
xk = xk*x
inv_fact /= k
acc += xk*inv_fact
return acc
def exp_horner(x, N=None):
"""N-ая частичная сумма ряда Тейлора для экспоненты методом Горнера."""
if N<=0: return 1 # Избегаем деления на ноль.
acc = 1 # Выражение во вложенных скобках в схеме Горнера
for k in range(N, 0, -1):
acc = acc/k*x + 1
return acc
def make_exp_test(fns, args={}, xmin=-1, xmax=1):
"""Проводит тест приближения fn показательной функции."""
x = np.linspace(xmin, xmax, 1000)
standard = np.exp(x)
theoretical_relative_error = (np.abs(x)/2+1)*np.finfo(float).eps
theoretical_absolute_error = theoretical_relative_error * standard
fig, ax1 = plt.subplots(1,1,figsize=(10,5))
ax2 = plt.twinx(ax1)
ax1.set_xlabel("Argument")
ax1.set_ylabel("Absolute error")
ax2.set_ylabel("Relative error")
ax1.semilogy(x, theoretical_absolute_error, '-r')
line, = ax2.semilogy(x, theoretical_relative_error, '--r')
line.set_label("theory (relative)")
for fn in fns:
subject = fn(x, **args)
absolute_error = np.abs(standard-subject)
relative_error = absolute_error/standard
ax1.semilogy(x, absolute_error, '-')
line, = ax2.semilogy(x, relative_error, '--')
line.set_label("{} (relative)".format(fn.__name__))
plt.legend()
plt.show()
make_exp_test([exp_taylor, exp_horner], args={"N": 3}, xmin=-0.001, xmax=0.001)
make_exp_test([exp_taylor, exp_horner], args={"N": 3}, xmin=-1, xmax=1)
make_exp_test([exp_taylor, exp_horner], args={"N": 3}, xmin=-10, xmax=10) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Clearly, four terms are too few to approximate the series well. Let us take more. | make_exp_test([exp_taylor, exp_horner], args={"N": 15}, xmin=-0.001, xmax=0.001)
make_exp_test([exp_taylor, exp_horner], args={"N": 15}, xmin=-1, xmax=1)
make_exp_test([exp_taylor, exp_horner], args={"N": 15}, xmin=-10, xmax=10) | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
The accuracy of the approximation grows with the number of terms, yet for even moderately large arguments not a single correct digit of the answer is obtained. Let us see how the error depends on the number of terms. | def cum_exp_taylor(x, N=None):
"""Computes all partial sums of the Taylor series for the exponential up to and including the N-th."""
acc = np.empty(N+1, dtype=float)
acc[0] = 1 # k-th partial sum. We start with k=0.
xk = 1 # Powers x^k.
inv_fact = 1 # 1/k!.
for k in range(1, N+1):
xk = xk*x
inv_fact /= k
acc[k] = acc[k-1]+xk*inv_fact
return acc
x = -10
standard = np.exp(x)
theoretical_relative_error = (np.abs(x)/2+1)*np.finfo(float).eps
theoretical_absolute_error = theoretical_relative_error * standard
Ns = np.arange(100)
partial_sums = cum_exp_taylor(x, N=Ns[-1])
absolute_error = np.abs(partial_sums-standard)
relative_error = absolute_error/standard
fig, ax1 = plt.subplots(1,1,figsize=(10,5))
ax2 = plt.twinx(ax1)
ax1.set_xlabel("Number of terms N")
ax1.set_ylabel("Absolute error")
ax2.set_ylabel("Relative error")
ax1.semilogy(Ns, Ns*0+theoretical_absolute_error, '-r')
line, = ax2.semilogy(Ns, Ns*0+theoretical_relative_error, '--r')
line.set_label("theory (relative)")
ax1.semilogy(Ns, absolute_error, '-')
line, = ax2.semilogy(Ns, relative_error, '--')
line.set_label("experiment (relative)")
plt.legend()
plt.show() | practice/What does mean mean mean.ipynb | alepoydes/introduction-to-numerical-simulation | mit |
Note: The data in reviews.txt we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way. | len(reviews)
reviews[0]
labels[0] | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Lesson: Develop a Predictive Theory<a id='lesson_2'></a> | print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 1: Quick Theory Validation<a id='project_1'></a>
There are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.
You'll find the Counter class to be useful in this exercise, as well as the numpy library. | from collections import Counter
import numpy as np | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
We'll create three Counter objects, one for words from postive reviews, one for words from negative reviews, and one for all the words. | # Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter() | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show. | # TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
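One possible way to fill in that counting loop (a sketch, using the `reviews` and `labels` lists loaded earlier):

```python
for i in range(len(reviews)):
    words = reviews[i].split(' ')
    if labels[i] == 'POSITIVE':
        positive_counts.update(words)   # Counter.update counts each word in the list
    else:
        negative_counts.update(words)
    total_counts.update(words)
```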
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used. | # Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common() | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of postive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews. | # Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
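A minimal sketch of this calculation, using the hint's formula and the 100-use threshold:

```python
for word, count in total_counts.items():
    if count >= 100:
        pos_neg_ratios[word] = positive_counts[word] / float(negative_counts[word] + 1)
```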
Examine the ratios you've calculated for a few words: | print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in postive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so that a word's distance from neutral in its positive-to-negative ratio indicates how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
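A small illustration of why the logarithm helps: equally strong ratios on the two sides become symmetric around zero.

```python
import numpy as np

print(4.0, 0.25)                    # raw ratios: hard to compare magnitudes
print(np.log(4.0), np.log(0.25))    # +1.386..., -1.386...: same magnitude, opposite signs
```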
TODO: Go through all the ratios you calculated and convert them to logarithms. (i.e. use np.log(ratio))
In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs. | # TODO: Convert ratios to logs | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Examine the new ratios you've calculated for the same words from before: | print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"])) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above 1, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below -1. It's now clear that both of these words are associated with specific, opposing sentiments.
Now run the following cells to see more ratios.
The first cell displays all the words, ordered by how associated they are with postive reviews. (Your notebook will most likely truncate the output so you won't actually see all the words in the list.)
The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write reversed(pos_neg_ratios.most_common()).)
You should continue to see values similar to the earlier ones we checked – neutral words will be close to 0, words will get more positive as their ratios approach and go above 1, and words will get more negative as their ratios approach and go below -1. That's why we decided to use the logs instead of the raw ratios. | # words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1] | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 1.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Transforming Text into Numbers<a id='lesson_3'></a>
The cells here include code Andrew shows in the next video. We've included it so you can run the code along with the video without having to type in everything. | from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png') | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 2: Creating the Input/Output Data<a id='project_2'></a>
TODO: Create a set named vocab that contains every word in the vocabulary. | # TODO: Create set named "vocab" containing all of the words from all of the reviews
vocab = None | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
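One possible way to build this set (a sketch):

```python
vocab = set(word for review in reviews for word in review.split(' '))
```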
Run the following cell to check your vocabulary size. If everything worked correctly, it should print 74074 | vocab_size = len(vocab)
print(vocab_size) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. layer_0 is the input layer, layer_1 is a hidden layer, and layer_2 is the output layer. | from IPython.display import Image
Image(filename='sentiment_network_2.png') | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Create a numpy array called layer_0 and initialize it to all zeros. You will find the zeros function particularly helpful here. Be sure you create layer_0 as a 2-dimensional matrix with 1 row and vocab_size columns. | # TODO: Create layer_0 matrix with dimensions 1 by vocab_size, initially filled with zeros
layer_0 = None | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell. It should display (1, 74074) | layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png') | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
layer_0 contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word. | # Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0. | def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0 | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
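A possible body for that TODO (a sketch, relying on the `word2index` map built above):

```python
def update_input_layer(review):
    """Count word occurrences in `review` and store them in the global layer_0."""
    global layer_0
    layer_0 *= 0  # reset the previous state
    for word in review.split(' '):
        layer_0[0][word2index[word]] += 1
```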
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in layer_0. | update_input_layer(reviews[0])
layer_0 | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
TODO: Complete the implementation of get_target_for_label. It should return 0 or 1,
depending on whether the given label is NEGATIVE or POSITIVE, respectively. | def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
# TODO: Your code here | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
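One possible completion (a sketch):

```python
def get_target_for_label(label):
    """Convert a label to 0 or 1."""
    return 1 if label == 'POSITIVE' else 0
```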
Run the following two cells. They should print out 'POSITIVE' and 1, respectively. | labels[0]
get_target_for_label(labels[0]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following two cells. They should print out 'NEGATIVE' and 0, respectively. | labels[1]
get_target_for_label(labels[1]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code) | import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set()
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = None
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = None
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
pass
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
| sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to create a SentimentNetwork that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of 0.1. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set).
We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing. | mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, 0.01, and then train the new network. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, 0.001, and then train the new network. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
With a learning rate of 0.001, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson.
End of Project 3.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Understanding Neural Noise<a id='lesson_4'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. | from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common() | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 4: Reducing Noise in Our Input Data<a id='project_4'></a>
TODO: Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:
* Copy the SentimentNetwork class you created earlier into the following cell.
* Modify update_input_layer so it does not count how many times each word is used, but rather just stores whether or not a word was used. | # TODO: -Copy the SentimentNetwork class from Projet 3 lesson
# -Modify it to reduce noise, like in the video | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of 0.1. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That should have trained much better than the earlier attempts. It's still not wonderful, but it should have improved dramatically. Run the following cell to test your model with 1000 predictions. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 4.
Andrew's solution was actually in the previous video, so rewatch that video if you had any problems with that project. Then continue on to the next lesson.
Analyzing Inefficiencies in our Network<a id='lesson_5'></a>
The following cells include the code Andrew shows in the next video. We've included it here so you can run the cells along with the video without having to type in everything. | Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1 | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 5: Making our Network More Efficient<a id='project_5'></a>
TODO: Make the SentimentNetwork class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Remove the update_input_layer function - you will not need it in this version.
* Modify init_network:
You no longer need a separate input layer, so remove any mention of self.layer_0
You will be dealing with the old hidden layer more directly, so create self.layer_1, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero
Modify train:
Change the name of the input parameter training_reviews to training_reviews_raw. This will help with the next step.
At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from word2index) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local list variable named training_reviews that should contain a list for each review in training_reviews_raw. Those lists should contain the indices for words found in the review.
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
In the forward pass, replace the code that updates layer_1 with new logic that only adds the weights for the indices used in the review.
When updating weights_0_1, only update the individual weights that were used in the forward pass.
Modify run:
Remove call to update_input_layer
Use self's layer_1 instead of a local layer_1 object.
Much like you did in train, you will need to pre-process the review so you can work with word indices, then update layer_1 by adding weights for the indices used in the review. | # TODO: -Copy the SentimentNetwork class from Project 4 lesson
# -Modify it according to the above instructions | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to recreate the network and train it once again. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 5.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Further Noise Reduction<a id='lesson_6'></a> | Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Project 6: Reducing Noise by Strategically Reducing the Vocabulary<a id='project_6'></a>
TODO: Improve SentimentNetwork's performance by reducing more noise in the vocabulary. Specifically, do the following:
* Copy the SentimentNetwork class from the previous project into the following cell.
* Modify pre_process_data:
Add two additional parameters: min_count and polarity_cutoff
Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)
Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like.
Change so words are only added to the vocabulary if they occur in the vocabulary more than min_count times.
Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least polarity_cutoff
Modify __init__:
Add the same two parameters (min_count and polarity_cutoff) and use them when you call pre_process_data | # TODO: -Copy the SentimentNetwork class from Project 5 lesson
# -Modify it according to the above instructions | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to train your network with a small polarity cutoff. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
And run the following cell to test its performance. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
Run the following cell to train your network with a much larger polarity cutoff. | mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
And run the following cell to test its performance. | mlp.test(reviews[-1000:],labels[-1000:]) | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
End of Project 6.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Analysis: What's Going on in the Weights?<a id='lesson_7'></a> | mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words | sentiment-network/Sentiment_Classification_Projects.ipynb | yuvrajsingh86/DeepLearning_Udacity | mit |
The plate lies in the $xy$-plane with the surface at $z = 0$. The atoms lie in the $xz$-plane with $z>0$.
We can set the angle between the interatomic axis and the z-axis theta and the center of mass distance from the surface distance_surface. distance_atoms defines the interatomic distance for which the pair potential is calculated. The units of the respective quantities are given as comments.
Be careful: theta = np.pi/2 corresponds to horizontal alignment of the two atoms with respect to the surface. For different angles, a large interatomic distance distance_atoms might lead to one of the atoms being placed inside the plate. Make sure that distance_surface is larger than distance_atoms*np.abs(np.cos(theta))/2. | theta = np.pi/2 # rad
distance_atoms = 10 # µm
distance_surface = np.linspace(distance_atoms*np.abs(np.cos(theta))/2, 2*distance_atoms,30) # µm | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
Next we define the state that we are interested in using pairinteraction's StateOne class. As shown in Figures 4 and 5 of Phys. Rev. A 96, 062509 (2017), we expect changes of about 50% for the $C_6$ coefficient of the $|69s_{1/2},m_j=1/2;72s_{1/2},m_j=1/2\rangle$ pair state of rubidium, so this provides a good example.
We set up the one-atom system using restrictions on the energy, the principal quantum number n, and the angular momentum l. This is done by means of the restrict... functions in SystemOne. | state_one1 = pi.StateOne("Rb", 69, 0, 0.5, 0.5)
state_one2 = pi.StateOne("Rb", 72, 0, 0.5, 0.5)
# Set up one-atom system
system_one = pi.SystemOne(state_one1.getSpecies(), cache)
system_one.restrictEnergy(min(state_one1.getEnergy(),state_one2.getEnergy()) - 30, \
max(state_one1.getEnergy(),state_one2.getEnergy()) + 30)
system_one.restrictN(min(state_one1.getN(),state_one2.getN()) - 3, \
max(state_one1.getN(),state_one2.getN()) + 3)
system_one.restrictL(min(state_one1.getL(),state_one2.getL()) - 1, \
max(state_one1.getL(),state_one2.getL()) + 1) | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
The pair state state_two is created from the one atom states state_one1 and state_one2 using the StateTwo class.
From the previously set up system_one we define system_two using SystemTwo class. This class also contains methods set.. to set angle, distance, surface distance and to enableGreenTensor in order implement a surface. | # Set up pair state
state_two = pi.StateTwo(state_one1, state_one2)
# Set up two-atom system
system_two = pi.SystemTwo(system_one, system_one, cache)
system_two.restrictEnergy(state_two.getEnergy() - 3, state_two.getEnergy() + 3)
system_two.setAngle(theta)
system_two.setDistance(distance_atoms)
system_two.setSurfaceDistance(distance_surface[0])
system_two.enableGreenTensor(True)
system_two.buildInteraction() | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
We calculate the $C_6$ coefficients. The energy shift is given by the difference between the interaction energy at the given distance_surface and the unperturbed energy of the two-atom state state_two.getEnergy(). The $C_6$ coefficient is then given by the product of energyshift and distance_atoms**6.
idx is the index of the two atom state. The command getOverlap(state_two, 0, -theta, 0) rotates the quantisation axis of state_two by theta around the y-axis. The rotation is given by the Euler angles (0, -theta, 0) in zyz convention. The negative sign of theta is needed because the Euler angles used by pairinteraction represent a rotation of the coordinate system. Thus, the quantisation axis has to be rotated by the inverse angle. | # Calculate C6 coefficients
C6 = []
for d in distance_surface:
system_two.setSurfaceDistance(d)
system_two.diagonalize()
idx = np.argmax(system_two.getOverlap(state_two, 0, -theta, 0))
energyshift = system_two.getHamiltonian().diagonal()[idx]-state_two.getEnergy()
C6.append(energyshift*distance_atoms**6)
# Plot results
plt.plot(distance_surface/distance_atoms, np.abs(C6))
plt.xlim(min(distance_surface/distance_atoms), max(distance_surface/distance_atoms))
plt.xlabel("distance to surface / interatomic distance")
plt.ylabel("|C$_6$| (GHz $\mu m^6$)"); | doc/sphinx/examples_python/vdw_near_surface.ipynb | hmenke/pairinteraction | gpl-3.0 |
Simple Sounding
Use MetPy as straightforwardly as possible to make a Skew-T LogP plot. | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, SkewT
from metpy.units import units
# Change default to be better for skew-T
plt.rcParams['figure.figsize'] = (9, 9)
# Upper air data can be obtained using the siphon package, but for this example we will use
# some of MetPy's sample data.
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('jan20_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
df['u_wind'], df['v_wind'] = mpcalc.wind_components(df['speed'],
np.deg2rad(df['direction']))
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',
'u_wind', 'v_wind'), how='all').reset_index(drop=True) | v0.9/_downloads/ef4bfbf049be071a6c648d7918a50105/Simple_Sounding.ipynb | metpy/MetPy | bsd-3-clause |
We will pull the data out of the example dataset into individual variables and
assign units. | p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
skew = SkewT()
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
skew.ax.set_ylim(1000, 100)
# Add the MetPy logo!
fig = plt.gcf()
add_metpy_logo(fig, 115, 100)
# Example of defining your own vertical barb spacing
skew = SkewT()
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
# Set spacing interval--Every 50 mb from 1000 to 100 mb
my_interval = np.arange(100, 1000, 50) * units('mbar')
# Get indexes of values closest to defined interval
ix = mpcalc.resample_nn_1d(p, my_interval)
# Plot only values nearest to defined interval values
skew.plot_barbs(p[ix], u[ix], v[ix])
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
skew.ax.set_ylim(1000, 100)
# Add the MetPy logo!
fig = plt.gcf()
add_metpy_logo(fig, 115, 100)
# Show the plot
plt.show() | v0.9/_downloads/ef4bfbf049be071a6c648d7918a50105/Simple_Sounding.ipynb | metpy/MetPy | bsd-3-clause |
Init SparkContext | from bigdl.dllib.nncontext import init_spark_on_local, init_spark_on_yarn
import numpy as np
import os
hadoop_conf_dir = os.environ.get('HADOOP_CONF_DIR')
if hadoop_conf_dir:
sc = init_spark_on_yarn(
hadoop_conf=hadoop_conf_dir,
conda_name=os.environ.get("ZOO_CONDA_NAME", "zoo"), # The name of the created conda-env
num_executors=2,
executor_cores=4,
executor_memory="2g",
driver_memory="2g",
driver_cores=1,
extra_executor_memory_for_ray="3g")
else:
sc = init_spark_on_local(cores = 8, conf = {"spark.driver.memory": "2g"})
# It may take a while to distribute the local environment, including Python and Java, to the cluster
import ray
from bigdl.orca.ray import OrcaRayContext
ray_ctx = OrcaRayContext(sc=sc, object_store_memory="4g")
ray_ctx.init()
#ray.init(num_cpus=30, include_webui=False, ignore_reinit_error=True) | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
A simple parameter server can be implemented as a Python class in a few lines of code.
EXERCISE: Make the ParameterServer class an actor. | dim = 10
@ray.remote
class ParameterServer(object):
def __init__(self, dim):
self.parameters = np.zeros(dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
ps = ParameterServer.remote(dim)
| apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
A worker can be implemented as a simple Python function that repeatedly gets the latest parameters, computes an update to the parameters, and sends the update to the parameter server. | import time  # needed for time.sleep below (and time.time later on)
@ray.remote
def worker(ps, dim, num_iters):
for _ in range(num_iters):
# Get the latest parameters.
parameters = ray.get(ps.get_parameters.remote())
# Compute an update.
update = 1e-3 * parameters + np.ones(dim)
# Update the parameters.
ps.update_parameters.remote(update)
# Sleep a little to simulate a real workload.
time.sleep(0.5)
# Test that worker is implemented correctly. You do not need to change this line.
ray.get(worker.remote(ps, dim, 1))
# Start two workers.
worker_results = [worker.remote(ps, dim, 100) for _ in range(2)] | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
As the worker tasks are executing, you can query the parameter server from the driver and see the parameters changing in the background. | print(ray.get(ps.get_parameters.remote())) | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
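To actually watch the values change, one can poll the parameter server a few times while the workers are still running (a minimal sketch; the number of polls and the sleep interval are arbitrary):
import time
for _ in range(5):
    # Fetch the current parameter vector from the actor and print a few entries.
    print(ray.get(ps.get_parameters.remote())[:5])
    time.sleep(1.0)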
Sharding a Parameter Server
As the number of workers increases, the volume of updates being sent to the parameter server will increase. At some point, the network bandwidth into the parameter server machine or the computation down by the parameter server may be a bottleneck.
Suppose you have $N$ workers and $1$ parameter server, and suppose each of these is an actor that lives on its own machine. Furthermore, suppose the model size is $M$ bytes. Then sending all of the parameters from the workers to the parameter server will mean that $N * M$ bytes in total are sent to the parameter server. If $N = 100$ and $M = 10^8$, then the parameter server must receive ten gigabytes, which, assuming a network bandwidth of 10 gigabits per second, would take 8 seconds. This would be prohibitive.
On the other hand, if the parameters are sharded (that is, split) across $K$ parameter servers with $K = 100$, and each parameter server lives on a separate machine, then each parameter server needs to receive only 100 megabytes, which can be done in 80 milliseconds. This is much better.
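The back-of-the-envelope numbers above can be checked with a few lines (a sketch; the 10 gigabit/s bandwidth is the assumption stated in the text):
N, M, K = 100, 10**8, 100        # workers, model size in bytes, parameter server shards
bytes_per_second = 10e9 / 8      # 10 gigabits per second expressed in bytes per second
print(N * M / bytes_per_second)      # unsharded: ~8 seconds of traffic into one machine
print(N * M / K / bytes_per_second)  # sharded: ~0.08 seconds (80 ms) per shard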
EXERCISE: The code below defines a parameter server shard class. Modify this class to make ParameterServerShard an actor. We will need to revisit this code soon and increase num_shards. | @ray.remote
class ParameterServerShard(object):
def __init__(self, sharded_dim):
self.parameters = np.zeros(sharded_dim)
def get_parameters(self):
return self.parameters
def update_parameters(self, update):
self.parameters += update
total_dim = (10 ** 8) // 8 # This works out to 100MB (we have 12.5 million
# float64 values, which are each 8 bytes).
num_shards = 2 # The number of parameter server shards.
assert total_dim % num_shards == 0, ('In this exercise, the number of shards must '
'perfectly divide the total dimension.')
# Start some parameter servers.
ps_shards = [ParameterServerShard.remote(total_dim // num_shards) for _ in range(num_shards)]
assert hasattr(ParameterServerShard, 'remote'), ('You need to turn ParameterServerShard into an '
'actor (by using the ray.remote keyword).') | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
The code below implements a worker that does the following.
1. Gets the latest parameters from all of the parameter server shards.
2. Concatenates the parameters together to form the full parameter vector.
3. Computes an update to the parameters.
4. Partitions the update into one piece for each parameter server.
5. Applies the right update to each parameter server shard. | @ray.remote
def worker_task(total_dim, num_iters, *ps_shards):
# Note that ps_shards are passed in using Python's variable number
# of arguments feature. We do this because currently actor handles
# cannot be passed to tasks inside of lists or other objects.
for _ in range(num_iters):
# Get the current parameters from each parameter server.
parameter_shards = [ray.get(ps.get_parameters.remote()) for ps in ps_shards]
assert all([isinstance(shard, np.ndarray) for shard in parameter_shards]), (
'The parameter shards must be numpy arrays. Did you forget to call ray.get?')
# Concatenate them to form the full parameter vector.
parameters = np.concatenate(parameter_shards)
assert parameters.shape == (total_dim,)
# Compute an update.
update = np.ones(total_dim)
# Shard the update.
update_shards = np.split(update, len(ps_shards))
# Apply the updates to the relevant parameter server shards.
for ps, update_shard in zip(ps_shards, update_shards):
ps.update_parameters.remote(update_shard)
# Test that worker_task is implemented correctly. You do not need to change this line.
ray.get(worker_task.remote(total_dim, 1, *ps_shards)) | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
EXERCISE: Experiment by changing the number of parameter server shards, the number of workers, and the size of the data.
NOTE: Because these processes are all running on the same machine, network bandwidth will not be a limitation and sharding the parameter server will not help. To see the difference, you would need to run the application on multiple machines. There are still regimes where sharding a parameter server can help speed up computation on the same machine (by parallelizing the computation that the parameter server processes have to do). If you want to see this effect, you should implement a synchronous training application. In the asynchronous setting, the computation is staggered and so speeding up the parameter server usually does not matter. | num_workers = 4
# Start some workers. Try changing various quantities and see how the
# duration changes.
start = time.time()
ray.get([worker_task.remote(total_dim, 5, *ps_shards) for _ in range(num_workers)])
print('This took {} seconds.'.format(time.time() - start)) | apps/ray/parameter_server/sharded_parameter_server.ipynb | intel-analytics/BigDL | apache-2.0 |
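For reference, the synchronous variant mentioned in the note above could be sketched as follows (illustrative only; it reuses ps_shards, total_dim and num_workers from the previous cells, and uses a placeholder update of ones where a real gradient computed from the parameters would go):
@ray.remote
def synchronous_update(total_dim, *ps_shards):
    # Read every shard (the synchronous "pull" step), then return one update vector.
    parameters = np.concatenate([ray.get(ps.get_parameters.remote()) for ps in ps_shards])
    # Placeholder update; a real application would compute a gradient from `parameters`.
    return np.ones(total_dim)

# The driver waits for all workers before applying the averaged update to the shards.
updates = ray.get([synchronous_update.remote(total_dim, *ps_shards) for _ in range(num_workers)])
mean_update = np.mean(updates, axis=0)
for ps, update_shard in zip(ps_shards, np.split(mean_update, len(ps_shards))):
    ps.update_parameters.remote(update_shard)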
Authentication
In order to run this tutorial successfully, we need to be authenticated first.
Depending on where we are running this notebook, the authentication steps may vary:
| Runner | Authentication Steps |
| ----------- | ----------- |
| Local Computer | Use a service account, or run the following command: <br><br>gcloud auth login |
| Colab | Run the following python code and follow the instructions: <br><br>from google.colab import auth <br> auth.authenticate_user() |
| Vertex AI (Workbench) | Authentication is provided by Workbench | | try:
from google.colab import auth
print("Authenticating in Colab")
auth.authenticate_user()
print("Authenticated")
except: # noqa
print("This notebook is not running on Colab.")
print("Please make sure to follow the authentication steps.") | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Configurations
Let's make sure we enter the name of our GCP project in the next cell. | # ENTER THE GCP PROJECT HERE
gcp_project = "YOUR-GCP-PROJECT"
print(f"gcp_project is set to {gcp_project}")
def helper_function():
"""
Add a description about what this function does.
"""
return None | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Data Preparation
Query the Data | query = """
SELECT
created_date, category, complaint_type, neighborhood, latitude, longitude
FROM
`bigquery-public-data.san_francisco_311.311_service_requests`
LIMIT 1000;
"""
bqclient = bigquery.Client(project=gcp_project)
dataframe = bqclient.query(query).result().to_dataframe() | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Check the Dataframe | print(dataframe.shape)
dataframe.head() | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
Process the Dataframe | # Convert the datetime to date
dataframe['created_date'] = dataframe['created_date'].apply(datetime.date) | samples/tutorial.ipynb | llooker/public-datasets-pipelines | apache-2.0 |
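As a further illustrative processing step (not prescribed by the tutorial), the sampled rows can be summarised, for example by counting requests per neighborhood:
# Count 311 requests per neighborhood in the 1000 sampled rows (illustration only)
dataframe.groupby('neighborhood')['complaint_type'].count().sort_values(ascending=False).head(10)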
2.1 Remove Dups:
Write code to remove duplicates from an unsorted linked list.
FOLLOW UP
How would you solve this problem if a temporary buffer is not allowed? |
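# A Node class is assumed to have been defined earlier in the notebook; a minimal
# version consistent with how it is used below would be (illustrative sketch only):
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next
    def __str__(self):
        return str(self.value) + (' -> ' + str(self.next) if self.next else '')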
List = Node(1, Node(2, Node(3, Node(4, Node(4, Node(4, Node(3, Node(2, Node(1)))))))))
def remove_dups(List):
marks = {}
cur = List
prev = None
while cur != None:
if marks.get(cur.value, 0) == 0: # not duplicated
marks[cur.value] = 1
else: # duplicated
prev.next = cur.next
cur = prev
prev = cur
cur = cur.next
print('input:' + str(List))
remove_dups(List)
print('output:' + str(List))
def remove_dups_wo_buffer(List):
cur0 = List
while cur0 != None:
prev = cur0
cur1 = cur0.next
while cur1 != None:
if cur1.value == cur0.value:
prev.next = cur1.next
cur1 = prev
prev = cur1
cur1 = cur1.next
cur0 = cur0.next
List = Node(1, Node(2, Node(3, Node(4, Node(4, Node(4, Node(3, Node(2, Node(1, Node(3, Node(2)))))))))))
print('input:' + str(List))
remove_dups_wo_buffer(List)
print('output:' + str(List)) | Issues/algorithms/Linked Lists.ipynb | stereoboy/Study | mit |
2.2 Return Kth to Last:
Implement an algorithm to find the kth to last element of a singly linked list. | List = Node(1, Node(2, Node(3, Node(4, Node(4, Node(4, Node(3, Node(2, Node(1, Node(3, Node(2)))))))))))
def kth_to_last(List, k):
cur = List
size = 0
while cur != None:
size += 1
cur = cur.next
if size < k:
return None
cur = List
for _ in range(size - k):
cur = cur.next
return cur.value
print(kth_to_last(List, 4))
def kth_to_last(head, k, i):
if head == None:
return None
node = kth_to_last(head.next, k, i)
i[0] = i[0] + 1
if i[0] == k:
return head
else:
return node
print(kth_to_last(List, 4, [0])) | Issues/algorithms/Linked Lists.ipynb | stereoboy/Study | mit |
Generate a model
First we will generate a simple galaxy model using KinMS itself, whose parameters we can then attempt to recover. If you have your own observed galaxy to fit then of course this step can be skipped!
The make_model function below creates a simple exponential disc:
$
\begin{align}
\large \Sigma_{H2}(r) \propto e^{\frac{-r}{d_{scale}}}
\end{align}
$
with a circular velocity profile which is parameterized using an arctan function:
$
\begin{align}
\large V(r) = \frac{2V_{flat}}{\pi} \arctan\left(\frac{r}{r_{turn}}\right)
\end{align}
$ | def make_model(param,obspars,rad,filename=None,plot=False):
'''
This function takes in the `param` array (along with obspars; the observational setup,
and a radius vector `rad`) and uses it to create a KinMS model.
'''
total_flux=param[0]
posAng=param[1]
inc=param[2]
v_flat=param[3]
r_turn=param[4]
scalerad=param[5]
### Here we use an exponential disk model for the surface brightness of the gas ###
sbprof = np.exp((-1)*rad/scalerad)
### We use a very simple arctan rotation curve model with two free parameters. ###
vel=(v_flat*2/np.pi)*np.arctan(rad/r_turn)
### This returns the model
return KinMS(obspars['xsize'],obspars['ysize'],obspars['vsize'],obspars['cellsize'],obspars['dv'],\
obspars['beamsize'],inc,sbProf=sbprof,sbRad=rad,velRad=rad,velProf=vel,\
intFlux=total_flux,posAng=posAng,fixSeed=True,fileName=filename).model_cube(toplot=plot)
| kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Note that we have set fixSeed=True in the KinMS call - this is crucial if you are fitting with KinMS. It ensures that if you generate two models with the same input parameters you will get an identical output model!
Now that we have our model function, let's use it to generate a model which we will later fit. The first thing we need is to define the setup of our desired datacube (typically if you are fitting real data this will all be determined from the header keywords - see below). | ### Setup cube parameters ###
obspars={}
obspars['xsize']=64.0 # arcseconds
obspars['ysize']=64.0 # arcseconds
obspars['vsize']=500.0 # km/s
obspars['cellsize']=1.0 # arcseconds/pixel
obspars['dv']=20.0 # km/s/channel
obspars['beamsize']=np.array([4.0,4.0,0]) # [bmaj,bmin,bpa] in (arcsec, arcsec, degrees) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
We also need to create a radius vector- you ideally want this to oversample your pixel grid somewhat to avoid interpolation errors! | rad=np.arange(0,100,0.3) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Now that we have all the ingredients, we can create our data to fit. Here we will also output the model to disc, so we can demonstrate how to read in the header keywords from real ALMA/VLA etc. data. | '''
True values for the flux, posang, inc etc, as defined in the model function
'''
guesses=np.array([30.,270.,45.,200.,2.,5.])
'''
RMS of data. Here we are making our own model so this is arbitrary.
When fitting real data this should be the observational RMS
'''
error=np.array(1e-3)
fdata=make_model(guesses,obspars,rad, filename="Test",plot=True) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Read in the data
In this example we already have our data in memory. But if you are fitting a real datacube this won't be the case! Here we read in the model we just created from a FITS file to make it clear how to do this. | ### Load in your observational data ###
hdulist = fits.open('Test_simcube.fits',ignore_blank=True)
fdata = hdulist[0].data.T
### Setup cube parameters ###
obspars={}
obspars['cellsize']=np.abs(hdulist[0].header['cdelt1']*3600.) # arcseconds/pixel
obspars['dv']=np.abs(hdulist[0].header['cdelt3']/1e3) # km/s/channel
obspars['xsize']=hdulist[0].header['naxis1']*obspars['cellsize'] # arcseconds
obspars['ysize']=hdulist[0].header['naxis2']*obspars['cellsize'] # arcseconds
obspars['vsize']=hdulist[0].header['naxis3']*obspars['dv'] # km/s
obspars['beamsize']=np.array([hdulist[0].header['bmaj']*3600.,hdulist[0].header['bmin']*3600.,hdulist[0].header['bpa']])# [bmaj,bmin,bpa] in (arcsec, arcsec, degrees)
| kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Fit the model
Now that we have our 'observational' data read into memory and a model function defined, we can fit one to the other! As our fake model is currently noiseless, let's add some Gaussian noise (obviously don't do this if your data is from a real telescope!): | fdata+=(np.random.normal(size=fdata.shape)*error) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Below we will proceed using the MCMC code GAStimator which was specifically designed to work with KinMS, however any minimiser should work in principle. For full details of how this code works, and a tutorial, see https://github.com/TimothyADavis/GAStimator . | from gastimator import gastimator,corner_plot
mcmc = gastimator(make_model,obspars,rad)
mcmc.labels=np.array(['Flux','posAng',"Inc","VFlat","R_turn","scalerad"])
mcmc.min=np.array([30.,1.,10,50,0.1,0.1])
mcmc.max=np.array([30.,360.,80,400,20,10])
mcmc.fixed=np.array([True,False,False,False,False,False])
mcmc.precision=np.array([1.,1.,1.,10,0.1,0.1])
mcmc.guesses=np.array([30.,275.,55.,210.,2.5,4.5]) #starting guesses, purposefully off! | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Setting good priors on the flux of your source is crucial to ensure the model outputs are physical. Luckily the integrated flux of your source should be easy to measure from your datacube! If you have a good measurement of this, then I would recommend forcing the total flux to that value by fixing it in the model (set mcmc.fixed=True for that parameter). If you can only get a guess then set as tight a prior as you can. This stops the model from hiding badly fitting components below the noise level.
It's always a good idea to plot your model over your data before you start the fitting process. That allows you to check that the model is reasonable, and tweak the parameters by hand to get good starting guesses. First you should generate a cube from your model function, then you can overplot it on your data using the simple plotting tool included with KinMS: | model=make_model(mcmc.guesses,obspars,rad) # make a model from your guesses
KinMS_plotter(fdata, obspars['xsize'], obspars['ysize'], obspars['vsize'], obspars['cellsize'],\
obspars['dv'], obspars['beamsize'], posang=guesses[1],overcube=model,rms=error).makeplots() | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
As you can see, the black contours of the model aren't a perfect match to the moment zero, spectrum and position-velocity diagram extracted from our "observed" datacube. One could tweak the parameters by hand, but as these are already close we can go on to do a fit!
If you are experimenting, then running until convergence should be good enough to get an idea of whether the model is physical (a low number of iterations, ~3000, works for me). | outputvalue, outputll= mcmc.run(fdata,error,3000,plot=False) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
As you can see, the final parameters (listed in the output with their 1sigma errors) are pretty close to those we input! One could use the corner_plot routine shipped with GAStimator to visualize our results, but with only 3000 steps (and a $\approx$30% acceptance rate) these won't be very pretty. If you need good error estimates/nice-looking corner plots for publication then I recommend at least 30,000 iterations, which may take several hours/days depending on your system and the size of your datacube.
One can visualize the best-fit model again to check how we did - turns out pretty well! (Note the flux in the integrated spectrum isn't perfect; this is because of the masking of the noisy data). | bestmodel=make_model(np.median(outputvalue,1),obspars,rad) # make a model from the median of the MCMC output
KinMS_plotter(fdata, obspars['xsize'], obspars['ysize'], obspars['vsize'], obspars['cellsize'],\
obspars['dv'], obspars['beamsize'], posang=guesses[1],overcube=bestmodel,rms=error).makeplots() | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
Tiny error problem
I have found that fitting whole datacubes with kinematic modelling tools such as KinMS can yield unphysically small uncertainties, for instance constraining inclination to $\pm\approx0.1^{\circ}$ in the fit example performed above. This is essentially a form of model mismatch - you are finding the very best model of a given type that fits the data - and as you have a large number of free parameters in a data cube you can find the best model (no matter how bad it is at actually fitting the data!) really well.
In works such as Smith et al. (2019) we have attempted to get around this by taking into account the variance of the $\chi^2$ statistic.
As observed data are noisy, the $\chi^2$ statistic has an additional uncertainty associated with it, following the chi-squared distribution (Andrae 2010). This distribution has a variance of $2(N - P)$, where $N$ is the number of constraints and $P$ the number of inferred parameters. For fitting datacubes $N$ is very large, so the variance becomes $\approx2N$.
Systematic effects can produce variations of $\chi^2$ of the order of this variance, and ignoring this effect yields unrealistically small uncertainty estimates. In order to mitigate this effect van
den Bosch & van de Ven (2009) proposed to increase the $1\sigma$ confidence interval to $\Delta\chi^2=\sqrt{2N}$. To achieve the same effect within the Bayesian MCMC approach discussed above we need to scale the log-likelihood, by increasing the RMS estimate provided to GAStimator by $(2N)^{1/4}$. This approach appears to yield physically credible formal uncertainties in the inferred parameters, whereas otherwise these uncertainties are unphysically small.
Let's try that with the example above: | error*=((2.0*fdata.size)**(0.25))
outputvalue, outputll= mcmc.run(fdata,error,3000,plot=False) | kinms/docs/KinMSpy_tutorial.ipynb | TimothyADavis/KinMSpy | mit |
What is Monte Carlo (MC) Integration?
Let us say that we want to approximate the area between the curve defined by $f(x) = x^2 + 3x + \ln{x}$ and the x-axis for $x\in[1,5]$. | def f(x):
return x**2 + 3*x + np.log(x)
step= 0.001
x = np.arange(1,5+step*0.1,step)
y = f(x)
print x.min(), x.max()
print y.min(), y.max()
plt.plot(x, y, lw=2., color="r")
plt.fill_between(x, 0, y, color="r", alpha=0.5)
plt.axhline(y=0, lw=1., color="k", linestyle="--")
plt.axhline(y=y.max(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.min(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.max(), lw=1., color="k", linestyle="--")
plt.xlabel("x")
plt.ylabel("y")
plt.title("$f(x) = x^2 + 3x + \ln{x}, x\in[1,5]$") | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
Concretely, we are interested in knowing the area of the red-shaded region in the above figure. Furthermore, I have also provided a rectangular bounding box for the range of values of $x$ and $y$. The true value of the area under the curve is $\sim{81.381}$ using its analytic integral formula (see http://www.wolframalpha.com/input/?i=integrate+x%5E2+%2B+3x+%2B+ln(x),+x+in+%5B1,5%5D).
The most accurate way to get the value of the area is to find the value of the definite integral $\int_{1}^{5} f(x) dx$. However, in many cases analytically finding this integral is very tough, especially if the function is not easily integrable. This is where numerical methods for approximating the integral come handy. Monte Carlo (MC) techniques are one of the most popular form of numerical solution used for definite integral calculation.
A basic intuition of the Monte Carlo Integration is as follows:
* Define the input domain $[a, b]$ of the integral $\int_{a}^{b} f(x) dx$.
* Uniformly sample $N$ points from the rectangular bounding box $[a, b) \times [0, \max f(x))$ shown above
* Find the proportion of points that lie in the region included in the area of $f(x)$, call it $p$
* Multiply the area of the rectangular region ($A$) by $p$ to get the area under the curve, $A^* = pA$
* As $N \to \infty$, the area of the shaded region $A^* \to \int_{a}^{b} f(x) dx$
* Usually, a much smaller value of $N$ will give an approximate value within a reasonable error margin.
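As a quick cross-check of the quoted analytic value of $\sim{81.381}$, the definite integral can also be evaluated with an adaptive quadrature routine (assuming scipy is available in this environment):
from scipy.integrate import quad
true_area, abs_err = quad(f, 1, 5)  # uses f(x) defined above
print(true_area)  # ~81.381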
Below, we will try to approximate the area of the curve using the MC integration method described above. We will use $N = 10^5$, and plot the points which fall in the region of the area in red and the other points in grey. | @jit
def get_MC_area(x, y, f, N=10**5, plot=False):
x_rands = x.min() + np.random.rand(N) * (x.max() - x.min())
y_rands = np.random.rand(N) * y.max()
y_true = f(x_rands)
integral_idx = (y_rands <= y_true)
if plot:
plt.plot(x_rands[integral_idx], y_rands[integral_idx],
alpha=0.3, color="r", linestyle='none',
marker='.', markersize=0.5)
plt.plot(x_rands[~integral_idx], y_rands[~integral_idx],
alpha=0.3, color="0.5", linestyle='none',
marker='.', markersize=0.5)
plt.axhline(y=0, lw=1., color="k", linestyle="--")
plt.axhline(y=y.max(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.min(), lw=1., color="k", linestyle="--")
plt.axvline(x=x.max(), lw=1., color="k", linestyle="--")
plt.xlabel("x")
plt.ylabel("y")
plt.title("$f(x) = x^2 + 3x + \ln{x}, x\in[1,5]; N=%s$" % N)
print "Proportion points in space: %.3f" % (integral_idx).mean()
area = (integral_idx).mean() * (
(x_rands.max() - x_rands.min()) * (y_rands.max() - y_rands.min())
)
return area
area = get_MC_area(x, y, f, N=10**5, plot=True)
print "Area is: %.3f" % area | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
As we can observe, the number of points which fall inside the region of interest is proportional to the area of the region. The estimated area, however, is only approximately equal to the true value of $81.38$. Let us also try with a higher value of $N=10^7$. | area = get_MC_area(x, y, f, N=10**7, plot=True)
print "Area is: %.3f" % area | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
The above figure shows that for $N=10^7$, the region covered by the sampled points is almost as smooth as the shaded region. Furthermore, the area is closer to the true value of $81.38$.
Now, let us also analyze how the calculated area changes with the order of magnitude of the number of sampled points. | for i in xrange(2,8):
area = get_MC_area(x, y, f, N=10**i, plot=False)
print i, area | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
Clearly, as the number of points increases, the area becomes closer to the true value.
Let us further examine this change by starting with $10^3$ points and going all the way up to $10^6$ points. | %%time
N_vals = 1000 + np.arange(1000)*1000
areas = np.zeros_like(N_vals, dtype="float")
for i, N in enumerate(N_vals):
area = get_MC_area(x, y, f, N=N, plot=False)
areas[i] = area
print "Mean area of last 100 points: %.3f" % np.mean(areas[-100:])
print "Areas of last 10 points: ", areas[-10:]
plt.plot(N_vals, areas, color="0.1", alpha=0.7)
plt.axhline(y=np.mean(areas[100:]), linestyle="--", lw=1., color="k")
plt.ylabel("Area")
plt.xlabel("Number of samples")
#plt.xscale("log") | Monte Carlo Integration.ipynb | napsternxg/ipython-notebooks | apache-2.0 |
3. Enter CM360 Segmentology Recipe Parameters
Wait for BigQuery->->->Census_Join to be created.
Join the StarThinker Assets Group to access the following assets
Copy CM360 Segmentology Sample. Leave the Data Source as is, you will change it in the next step.
Click Edit Connection, and change to BigQuery->->->Census_Join.
Or give these instructions to the client.
Modify the values below for your use case, can be done multiple times, then click play. | FIELDS = {
'account':'',
'auth_read':'user', # Credentials used for reading data.
'auth_write':'service', # Authorization used for writing data.
'recipe_name':'', # Name of report, not needed if ID used.
'date_range':'LAST_365_DAYS', # Timeframe to run report for.
'recipe_slug':'', # Name of Google BigQuery dataset to create.
'advertisers':[], # Comma delimited list of CM360 advertiser ids.
}
print("Parameters Set To: %s" % FIELDS)
| colabs/cm360_segmentology.ipynb | google/starthinker | apache-2.0 |
4. Execute CM360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play. | from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'description':'Create a dataset for bigquery tables.',
'hour':[
4
],
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','description':'Place where tables will be created in BigQuery.'}}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing function.'}},
'function':'Pearson Significance Test',
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
}
},
{
'google_api':{
'auth':'user',
'api':'dfareporting',
'version':'v3.4',
'function':'accounts.get',
'kwargs':{
'id':{'field':{'name':'account','kind':'integer','order':5,'default':'','description':'Campaign Manager Account ID'}},
'fields':'id,name'
},
'results':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing function.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'CM360_Account'
}
}
}
},
{
'dcm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'filters':{
'advertiser':{
'values':{'field':{'name':'advertisers','kind':'integer_list','order':6,'default':[],'description':'Comma delimited list of CM360 advertiser ids.'}}
}
},
'account':{'field':{'name':'account','kind':'string','order':5,'default':'','description':'Campaign Manager Account ID'}},
'body':{
'name':{'field':{'name':'recipe_name','kind':'string','suffix':' Segmentology','description':'The report name.','default':''}},
'criteria':{
'dateRange':{
'kind':'dfareporting#dateRange',
'relativeDateRange':{'field':{'name':'date_range','kind':'choice','order':3,'default':'LAST_365_DAYS','choices':['LAST_7_DAYS','LAST_14_DAYS','LAST_30_DAYS','LAST_365_DAYS','LAST_60_DAYS','LAST_7_DAYS','LAST_90_DAYS','LAST_24_MONTHS','MONTH_TO_DATE','PREVIOUS_MONTH','PREVIOUS_QUARTER','PREVIOUS_WEEK','PREVIOUS_YEAR','QUARTER_TO_DATE','WEEK_TO_DATE','YEAR_TO_DATE'],'description':'Timeframe to run report for.'}}
},
'dimensions':[
{
'kind':'dfareporting#sortedDimension',
'name':'advertiserId'
},
{
'kind':'dfareporting#sortedDimension',
'name':'advertiser'
},
{
'kind':'dfareporting#sortedDimension',
'name':'zipCode'
}
],
'metricNames':[
'impressions',
'clicks',
'totalConversions'
]
},
'type':'STANDARD',
'delivery':{
'emailOwner':False
},
'format':'CSV'
}
}
}
},
{
'dcm':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'account':{'field':{'name':'account','kind':'string','default':''}},
'name':{'field':{'name':'recipe_name','kind':'string','order':3,'suffix':' Segmentology','default':'','description':'Name of report, not needed if ID used.'}}
},
'out':{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'CM360_KPI',
'header':True
}
}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'from':{
'query':'SELECT Id AS Partner_Id, Name AS Partner, Advertiser_Id, Advertiser, Zip_Postal_Code AS Zip, SAFE_DIVIDE(Impressions, SUM(Impressions) OVER(PARTITION BY Advertiser_Id)) AS Impression, SAFE_DIVIDE(Clicks, Impressions) AS Click, SAFE_DIVIDE(Total_Conversions, Impressions) AS Conversion, Impressions AS Impressions FROM `{dataset}.CM360_KPI` CROSS JOIN `{dataset}.CM360_Account` ',
'parameters':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','description':'Place where tables will be created in BigQuery.'}}
},
'legacy':False
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','description':'Place where tables will be written in BigQuery.'}},
'view':'CM360_KPI_Normalized'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'normalize':{
'census_geography':'zip_codes',
'census_year':'2018',
'census_span':'5yr'
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'correlate':{
'join':'Zip',
'pass':[
'Partner_Id',
'Partner',
'Advertiser_Id',
'Advertiser'
],
'sum':[
'Impressions'
],
'correlate':[
'Impression',
'Click',
'Conversion'
],
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'CM360_KPI_Normalized',
'significance':80
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','suffix':'_Segmentology','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
| colabs/cm360_segmentology.ipynb | google/starthinker | apache-2.0 |
Add your own dictionary | # Dict objects can also be used to check words against a custom list of correctly-spelled words
# known as a Personal Word List. This is simply a file listing the words to be considered, one word per line.
# The following example creates a Dict object for the personal word list stored in “mywords.txt”:
pwl = enchant.request_pwl_dict("../Data_nlp/mywords.txt")
pwl.check('pappapero'), pwl.suggest('cittin'), pwl.check('altro')
# PyEnchant also provides the class DictWithPWL which can be used to combine a language dictionary
# and a personal word list file:
d2 = enchant.DictWithPWL("it_IT", "../Data_nlp/mywords.txt")
d2.check('altro') & d2.check('pappapero'), d2.suggest('cittin')
%%timeit
d2.suggest('poliza') | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
check entire phrase | from enchant.checker import SpellChecker
chkr = SpellChecker("it_IT")
chkr.set_text("questo è un picclo esmpio per dire cm funziona")
for err in chkr:
print(err.word)
print(chkr.suggest(err.word))
print(chkr.word, chkr.wordpos)
chkr.replace('pippo')
chkr.get_text() | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
tokenization
As explained above, the module enchant.tokenize provides the ability to split text into its component words. The current implementation is based only on the rules for the English language, and so might not be completely suitable for your language of choice. Fortunately, it is straightforward to extend the functionality of this module.
To implement a new tokenization routine for the language TAG, simply create a class/function “tokenize” within the module “enchant.tokenize.TAG”. This function will automatically be detected by the module’s get_tokenizer function and used when appropriate. The easiest way to accomplish this is to copy the module “enchant.tokenize.en” and modify it to suit your needs. | from enchant.tokenize import get_tokenizer
tknzr = get_tokenizer("en_US") # not tak for it_IT up to now
[w for w in tknzr("this is some simple text")]
from enchant.tokenize import get_tokenizer, HTMLChunker
tknzr = get_tokenizer("en_US")
[w for w in tknzr("this is <span class='important'>really important</span> text")]
tknzr = get_tokenizer("en_US",chunkers=(HTMLChunker,))
[w for w in tknzr("this is <span class='important'>really important</span> text")]
from enchant.tokenize import get_tokenizer, EmailFilter
tknzr = get_tokenizer("en_US")
[w for w in tknzr("send an email to [email protected] please")]
tknzr = get_tokenizer("en_US", filters = [EmailFilter])
[w for w in tknzr("send an email to [email protected] please")] | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
Other modules:
- CmdLineChecker
The module enchant.checker.CmdLineChecker provides the class CmdLineChecker which can be used to interactively check the spelling of some text. It uses standard input and standard output to interact with the user through a command-line interface. The code below shows how to create and use this class from within a python application, along with a short sample checking session:
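A minimal usage sketch (the text being checked is arbitrary):
from enchant.checker import SpellChecker
from enchant.checker.CmdLineChecker import CmdLineChecker
chkr = SpellChecker("en_US")
chkr.set_text("this is sme example txt")
cmdln = CmdLineChecker()
cmdln.set_checker(chkr)
cmdln.run()  # starts the interactive stdin/stdout checking session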
wxSpellCheckerDialog
The module enchant.checker.wxSpellCheckerDialog provides the class wxSpellCheckerDialog which can be used to interactively check the spelling of some text. The code below shows how to create and use such a dialog from within a wxPython application.
Word2vec
pip install gensim
pip install pyemd
https://radimrehurek.com/gensim/models/word2vec.html | import gensim, logging
from gensim.models import Word2Vec
model = gensim.models.KeyedVectors.load_word2vec_format(
'../Data_nlp/GoogleNews-vectors-negative300.bin.gz', binary=True)
model.doesnt_match("breakfast brian dinner lunch".split())
# Check the model against a file of word pairs with human similarity scores (e.g. WordSim-353),
# to see whether the model assigns similar distances. The call needs the path to such a
# pairs file, so it is left commented out here:
# model.evaluate_word_pairs('path/to/word_pairs.tsv')
len(model.index2word)
# check accuracy against a premade grouped words
questions_words = model.accuracy('../Data_nlp/word2vec/trunk/questions-words.txt')
phrases_words = model.accuracy('../Data_nlp/word2vec/trunk/questions-phrases.txt')
questions_words[4]['incorrect']
print( model.n_similarity(['pasta'], ['spaghetti']) )
print( model.n_similarity(['pasta'], ['tomato']) )
print( model.n_similarity(['pasta'], ['car']) )
print( model.n_similarity(['cat'], ['dog']) )
model.similar_by_vector( model.word_vec('welcome') )
model.similar_by_word('welcome')
model.syn0[4,]
model.index2word[4]
model.word_vec('is')
model.syn0norm[4,]
model.vector_size
import numpy as np
model.similar_by_vector( (model.word_vec('Goofy') + model.word_vec('Minni'))/2 )
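# A classic analogy query with the same vectors; most_similar is part of the
# gensim KeyedVectors API: vector('king') - vector('man') + vector('woman') ~ 'queen'
model.most_similar(positive=['woman', 'king'], negative=['man'], topn=3)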
import pyemd
# This method only works if `pyemd` is installed (can be installed via pip, but requires a C compiler).
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
# Remove their stopwords.
import nltk
stopwords = nltk.corpus.stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stopwords]
sentence_president = [w for w in sentence_president if w not in stopwords]
# Compute WMD.
distance = model.wmdistance(sentence_obama, sentence_president)
print(distance)
import nltk
stopwords = nltk.corpus.stopwords.words('english')
def sentence_distance(s1, s2):
sentence_obama = [w for w in s1.split() if w not in stopwords]
sentence_president = [w for w in s2.split() if w not in stopwords]
print(sentence_obama, sentence_president, sep='\t')
print(model.wmdistance(sentence_obama, sentence_president), end='\n\n')
sentence_distance('I run every day in the morning', 'I like football')
sentence_distance('I run every day in the morning', 'I run since I was born')
sentence_distance('I run every day in the morning', 'you are idiot')
sentence_distance('I run every day in the morning', 'Are you idiot?')
sentence_distance('I run every day in the morning', 'Is it possible to die?')
sentence_distance('I run every day in the morning', 'Is it possible to die')
sentence_distance('I run every day in the morning', 'I run every day')
sentence_distance('I run every day in the morning', 'I eat every day')
sentence_distance('I run every day in the morning', 'I have breakfast in the morning')
sentence_distance('I run every day in the morning', 'I have breakfast every day in the morning')
sentence_distance('I run every day in the morning', 'Each day I run')
sentence_distance('I run every day in the morning', 'I run every day in the morning')
sentence_distance('I run every day in the morning', 'Each day I run')
sentence_distance('I run every day in the morning', 'Each I run')
sentence_distance('I run every day in the morning', 'Each day run')
sentence_distance('I run every day in the morning', 'Each day I')
sentence_distance('I every day in the morning', 'Each day I run')
sentence_distance('I run day in the morning', 'Each day I run')
sentence_distance('I run every in morning', 'Each day I run')
sentence_distance('I run every in', 'Each day I run')
def get_vect(w):
try:
return model.word_vec(w)
except KeyError:
return np.zeros(model.vector_size)
def calc_avg(s):
ws = [get_vect(w) for w in s.split() if w not in stopwords]
avg_vect = sum(ws)/len(ws)
return avg_vect
from scipy.spatial import distance
def get_euclidean(s1, s2):
return distance.euclidean(calc_avg(s1), calc_avg(s2))
# same questions
s1 = 'Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?'
s2 = "I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?"
sentence_distance(s1, s2)
print(get_euclidean(s1, s2))
# same questions as above without punctuations
s1 = 'Astrology I am a Capricorn Sun Cap moon and cap rising what does that say about me'
s2 = "I am a triple Capricorn Sun Moon and ascendant in Capricorn What does this say about me"
sentence_distance(s1, s2)
print(get_euclidean(s1, s2))
# same questions
s1 = 'What is best way to make money online'
s2 = 'What is best way to ask for money online?'
sentence_distance(s1,s2)
print(get_euclidean(s1, s2))
# different questions
s1 = 'How did Darth Vader fought Darth Maul in Star Wars Legends?'
s2 = 'Does Quora have a character limit for profile descriptions?'
sentence_distance(s1,s2)
print(get_euclidean(s1, s2))
# the order of the words doesn't change the distance between the two phrases
s1ws = [w for w in s1.split() if w not in stopwords]
s2ws = [w for w in s2.split() if w not in stopwords]
print(model.wmdistance(s1ws, s2ws) )
print(model.wmdistance(s1ws[::-1], s2ws) )
print(model.wmdistance(s1ws, s2ws[::-1]) )
print(model.wmdistance(s1ws[3:]+s1ws[0:3], s2ws[::-1]) ) | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
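A cosine-based variant of the same average-vector comparison is a natural companion check (a sketch reusing calc_avg and the scipy distance module imported above):
def get_cosine(s1, s2):
    # scipy's cosine() returns a distance, i.e. 1 - cosine similarity
    return distance.cosine(calc_avg(s1), calc_avg(s2))

print(get_cosine('I run every day in the morning', 'Each day I run'))
print(get_cosine('I run every day in the morning', 'Are you idiot?'))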
conclusion:
- the distance works well
- the order of the words is not taken into account
Translate using google translate
https://github.com/ssut/py-googletrans
should be free and unlimited; an internet connection is required
pip install googletrans | from googletrans import Translator
o = open("../AliceNelPaeseDelleMeraviglie.txt")
all = ''
for l in o: all += l
translator = Translator()
for i in range(42, 43, 1):
print(all[i * 1000:i * 1000 + 1000], end='\n\n')
print(translator.translate(all[i * 1000:i * 1000 + 1000], dest='en').text)
## if language is not passed it is guessed, so it can detect a language
frase = "Ciao Giulia, ti va un gelato?"
det = translator.detect(frase)
print("Languge:", det.lang, " with confidence:", det.confidence)
# command line usage, but it doesn't seem to work for me
!translate "veritas lux mea" -s la -d en
translations = translator.translate(
['The quick brown fox', 'jumps over', 'the lazy dog'], dest='ko')
for translation in translations:
print(translation.origin, ' -> ', translation.text)
phrase = translator.translate(frase, 'en')
phrase.origin, phrase.text, phrase.src, phrase.pronunciation, phrase.dest | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
TreeTagger usage to tag an Italian (or other language) sentence
How To install:
- nltk needs to be already installed and working
- follow the instructions from http://www.cis.uni-muenchen.de/~schmid/tools/TreeTagger/
- run TreeTagger on terminal (echo 'Ciao Giulia come stai?' | tree-tagger-italian) to see if everything is working
- download the github to get the python support from: https://github.com/miotto/treetagger-python
- run /home/ale/anaconda3/bin/python setup.py install and everything should work (note that you need to specify which python you want, the default is python2)
Infos (these notes refer to the googletrans API above):
- The maximum character limit on a single text is 15k.
- this API does not guarantee that the library will work properly at all times
- for a more stable API, use the non-free https://cloud.google.com/translate/docs/
- If you get HTTP 5xx error or errors like #6, it's probably because Google has banned your client IP address | from treetagger import TreeTagger
tt = TreeTagger(language='english')
tt.tag('What is the airspeed of an unladen swallow?')
tt = TreeTagger(language='italian')
tt.tag('Proviamo a vedere un pò se funziona bene questo tagger') | .ipynb_checkpoints/NLP-checkpoint.ipynb | aborgher/Main-useful-functions-for-ML | gpl-3.0 |
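A small follow-up sketch of filtering the tagger output (assuming the usual [word, tag, lemma] triples returned by tag(), and the 'NOM' noun tag of the Italian tagset — both are assumptions here):
tagged = tt.tag('Proviamo a vedere se funziona bene questo tagger')
nouns = [word for word, tag, lemma in tagged if tag.startswith('NOM')]
print(nouns)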
This analysis was done by DataKind DC on behalf of the Consumer Product Safety Commission. This serves as a preliminary study of the NEISS dataset. We have been in contact with the CPSC and have been figuring out which important questions we can offer insight into. The questions that were analyzed were:
Are there products we should be aware of?
Are there differences between the sizes of hospitals?
Are there differences where race was reported or between different races?
Are there products we should be aware of?
To answer this question, I approached it in two ways. One way is to tabulate the total number of products reported by hospitals, and another is to look at the top items reported by each hospital.
The most commonly reported products are listed below. It appears that 1842 and 1807 are the top products that most hospitals report. | data.data['product'].value_counts()[0:9] | reports/neiss.ipynb | minh5/cpsc | mit |
Looking further, I examine which hospitals report these products the most, so we can focus on those hospitals. | data.get_hospitals_by_product('product_1842')
data.get_hospitals_by_product('product_1807') | reports/neiss.ipynb | minh5/cpsc | mit |
We can also view these as plots and compare the incident rates of these products across different hospitals. | data.plot_product('product_1842')
data.plot_product('product_1807') | reports/neiss.ipynb | minh5/cpsc | mit |