markdown | code | path | repo_name | license
---|---|---|---|---
Note: If you see a "403 - Forbidden" error above, you still need to click "I understand and accept" on the competition rules page.
Three files are downloaded:
train.csv: training data (contains features and targets)
test.csv: feature data used to make predictions to send to Kaggle
gender_submission.csv: an example competition submission file
Step 1: Exploratory Data Analysis
Perform exploratory data analysis and data preprocessing. Use as many text and code blocks as you need to explore the data. Note any findings. Repair any data issues you find.
Student Solution | # Your code goes here | content/04_classification/04_classification_project/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
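One possible starting point for this step (a sketch rather than the official solution; it assumes the standard Titanic column names such as Age, Embarked, Sex, and Pclass):

```python
import pandas as pd

train_df = pd.read_csv('train.csv')

# How much data is missing in each column?
print(train_df.isnull().sum())

# Simple repairs: fill numeric gaps with the median, categorical gaps with the mode
train_df['Age'] = train_df['Age'].fillna(train_df['Age'].median())
train_df['Embarked'] = train_df['Embarked'].fillna(train_df['Embarked'].mode()[0])

# A quick look at survival rates by sex and passenger class
print(train_df.groupby(['Sex', 'Pclass'])['Survived'].mean())
```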
Step 2: The Model
Build, fit, and evaluate a classification model. Perform any model-specific data processing that you need to perform. If the toolkit you use supports it, create visualizations for loss and accuracy improvements. Use as many text and code blocks as you need to explore the data. Note any findings.
Student Solution | # Your code goes here | content/04_classification/04_classification_project/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
Step 3: Make Predictions and Upload To Kaggle
In this step you will make predictions on the features found in the test.csv file and upload them to Kaggle using the Kaggle API. Use as many text and code blocks as you need to explore the data. Note any findings.
Student Solution | # Your code goes here | content/04_classification/04_classification_project/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
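A sketch of one way to produce and upload a submission (not the official solution; it uses the gender-based baseline that gender_submission.csv illustrates, and it assumes the Kaggle API token is already configured):

```python
import pandas as pd

test_df = pd.read_csv('test.csv')

# Baseline predictions in the spirit of gender_submission.csv:
# predict survival for female passengers, non-survival otherwise.
# Replace this with the predictions from your Step 2 model.
predictions = (test_df['Sex'] == 'female').astype(int)

submission = pd.DataFrame({'PassengerId': test_df['PassengerId'],
                           'Survived': predictions})
submission.to_csv('submission.csv', index=False)

# Upload with the Kaggle API (shell command in a notebook cell)
!kaggle competitions submit -c titanic -f submission.csv -m "baseline submission"
```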
What was your Kaggle score?
Record your score here
Step 4: Iterate on Your Model
In this step you're encouraged to play around with your model settings and to even try different models. See if you can get a better score. Use as many text and code blocks as you need to explore the data. Note any findings.
Student Solution | # Your code goes here | content/04_classification/04_classification_project/colab.ipynb | google/applied-machine-learning-intensive | apache-2.0 |
String arrays are also possible, but every element must have the same string length. If you assign a string longer than that length, it will be truncated. | c = np.zeros(5, dtype="S4")
c[0] = "abcd"
c[1] = "ABCDE"
c | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/2) NumPy 배열 생성과 변형.ipynb | junhwanjang/DataSchool | mit |
To create an array initialized with ones instead of zeros, use the ones command. | d = np.ones((2,3,4), dtype="i8")
d | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/2) NumPy 배열 생성과 변형.ipynb | junhwanjang/DataSchool | mit |
The stack command joins arrays along a new dimension (axis); naturally, all of the arrays being joined must have the same shape.
The axis argument (default 0) determines where the new axis is inserted after joining. A self-contained example follows the cell below. | np.stack([c1, c2])
np.stack([c1, c2], axis=1) | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/2) NumPy 배열 생성과 변형.ipynb | junhwanjang/DataSchool | mit |
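Since c1 and c2 come from a cell that is not shown here, the same idea in a self-contained form (the array contents are arbitrary):

```python
import numpy as np

c1 = np.zeros(3)
c2 = np.ones(3)

print(np.stack([c1, c2]).shape)          # (2, 3): the new axis is inserted at position 0
print(np.stack([c1, c2], axis=1).shape)  # (3, 2): the new axis is inserted at position 1
```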
Grid generation
To plot the graph of a function of two variables, or to tabulate its values, you need to generate many coordinate points at once and evaluate the function at each of them.
For example, for a function of two variables x and y, if you want to see how it behaves over the rectangular region where x runs from 0 to 2 and y runs from 0 to 4, you have to evaluate the function at the following (x, y) pairs inside that rectangle:
$$ (x,y) = (0,0), (0,1), (0,2), (0,3), (0,4), (1,0), \cdots (2,4) $$
NumPy's meshgrid command automates this process. meshgrid takes two vectors as arguments, one holding the points along the horizontal axis of the rectangle and one holding the points along the vertical axis, and produces every combination of them. However, instead of returning the combined (x, y) pairs directly, it returns two separate matrices, one containing only the x values and one containing only the y values. | x = np.arange(3)
x
y = np.arange(5)
y
X, Y = np.meshgrid(x, y)
X
Y
[list(zip(x, y)) for x, y in zip(X, Y)]  # wrap in list() so the (x, y) pairs display under Python 3
plt.scatter(X, Y, linewidths=10); | Lecture/05. 기초 선형 대수 1 - 행렬의 정의와 연산/2) NumPy 배열 생성과 변형.ipynb | junhwanjang/DataSchool | mit |
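The point of the grid is that a two-variable function can now be evaluated at every (x, y) pair in one vectorized step; a small sketch continuing from the X and Y above (the function itself is arbitrary):

```python
# Evaluate f(x, y) = x**2 + y**2 at every grid point at once
Z = X**2 + Y**2
plt.contourf(X, Y, Z)
plt.colorbar();
```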
Training checkpoints
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/checkpoint"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td>Run in Google Colab</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/checkpoint.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
The phrase "saving a TensorFlow model" typically means one of two things:
Checkpoints, or
SavedModel.
Checkpoints capture the exact values of all parameters (tf.Variable objects) used by a model. Checkpoints do not contain any description of the computation defined by the model, so they are typically only useful when source code that will use the saved parameter values is available.
The SavedModel format, on the other hand, includes a serialized description of the computation defined by the model in addition to the parameter values (checkpoint). Models in this format are independent of the source code that created them, which makes them suitable for deployment through TensorFlow Serving, TensorFlow Lite, TensorFlow.js, or programs in other programming languages (the C, C++, Java, Go, Rust, C#, etc. TensorFlow APIs).
This guide covers the APIs for writing and reading checkpoints.
Setup | import tensorflow as tf
class Net(tf.keras.Model):
"""A simple linear model."""
def __init__(self):
super(Net, self).__init__()
self.l1 = tf.keras.layers.Dense(5)
def call(self, x):
return self.l1(x)
net = Net() | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
Saving from tf.keras training APIs
See the tf.keras guide on saving and restoring.
tf.keras.Model.save_weights saves a TensorFlow checkpoint. | net.save_weights('easy_checkpoint')
Writing checkpoints
The persistent state of a TensorFlow model is stored in tf.Variable objects. These can be constructed directly, but are often created through high-level APIs such as tf.keras.layers or tf.keras.Model.
The easiest way to manage variables is to attach them to Python objects, then reference those objects.
Subclasses of tf.train.Checkpoint, tf.keras.layers.Layer, and tf.keras.Model automatically track variables assigned to their attributes. The following example constructs a simple linear model, then writes checkpoints which contain values for all of the model's variables.
You can easily save a model checkpoint with Model.save_weights; a small sketch of the automatic tracking follows below.
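A small sketch of that automatic tracking (the checkpoint prefix './ckpts/example' is an arbitrary choice made here for illustration):

```python
net(tf.zeros([1, 1]))  # calling the model once builds the Dense layer's variables
print([v.name for v in net.trainable_variables])  # kernel and bias tracked via self.l1

sketch_ckpt = tf.train.Checkpoint(model=net)
print(sketch_ckpt.save('./ckpts/example'))  # writes the tracked variables to disk
```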
Manual checkpointing
Setup
To help demonstrate all the features of tf.train.Checkpoint, define a toy dataset and optimization step: | def toy_dataset():
inputs = tf.range(10.)[:, None]
labels = inputs * 5. + tf.range(5.)[None, :]
return tf.data.Dataset.from_tensor_slices(
dict(x=inputs, y=labels)).repeat().batch(2)
def train_step(net, example, optimizer):
"""Trains `net` on `example` using `optimizer`."""
with tf.GradientTape() as tape:
output = net(example['x'])
loss = tf.reduce_mean(tf.abs(output - example['y']))
variables = net.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return loss | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
Create the checkpoint objects
To manually make a checkpoint, use a tf.train.Checkpoint object, where the objects you want to checkpoint are set as attributes on that object.
A tf.train.CheckpointManager can also be helpful for managing multiple checkpoints. | opt = tf.keras.optimizers.Adam(0.1)
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3) | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
Train and checkpoint the model
The following training loop creates an instance of the model and of an optimizer, then gathers them into a tf.train.Checkpoint object. It calls the training step in a loop on each batch of data, and periodically writes checkpoints to disk. | def train_and_checkpoint(net, manager):
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print("Restored from {}".format(manager.latest_checkpoint))
else:
print("Initializing from scratch.")
for _ in range(50):
example = next(iterator)
loss = train_step(net, example, opt)
ckpt.step.assign_add(1)
if int(ckpt.step) % 10 == 0:
save_path = manager.save()
print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
print("loss {:1.2f}".format(loss.numpy()))
train_and_checkpoint(net, manager) | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
Restore and continue training
After the first training cycle, you can pass a new model and manager, but training picks up exactly where it left off: | opt = tf.keras.optimizers.Adam(0.1)
net = Net()
dataset = toy_dataset()
iterator = iter(dataset)
ckpt = tf.train.Checkpoint(step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator)
manager = tf.train.CheckpointManager(ckpt, './tf_ckpts', max_to_keep=3)
train_and_checkpoint(net, manager) | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
The tf.train.CheckpointManager object deletes old checkpoints. Above, it is configured to keep only the three most recent checkpoints. | print(manager.checkpoints) # List the three remaining checkpoints
These paths, e.g. './tf_ckpts/ckpt-10', are not files on disk. Instead they are prefixes for an index file and one or more data files which contain the variable values. These prefixes are grouped together in a single checkpoint file ('./tf_ckpts/checkpoint') where the CheckpointManager saves its state. | !ls ./tf_ckpts
<a id="loading_mechanics"></a>
Loading mechanics
TensorFlow matches variables to checkpointed values by traversing a directed graph with named edges, starting from the object being loaded. Edge names typically come from attribute names in objects, for example the "l1" in self.l1 = tf.keras.layers.Dense(5). tf.train.Checkpoint uses its keyword argument names, as in the "step" in tf.train.Checkpoint(step=...).
The dependency graph from the example above looks like this:
The optimizer is in red, regular variables are in blue, and optimizer slot variables are in orange. The other nodes, for example the one representing the tf.train.Checkpoint, are in black.
Calling restore on a tf.train.Checkpoint object queues the requested restorations, restoring variable values as soon as there is a matching path from the Checkpoint object. For example, you can load just the bias from the model defined above by reconstructing one path to it through the network and the layer. | to_restore = tf.Variable(tf.zeros([5]))
print(to_restore.numpy()) # All zeros
fake_layer = tf.train.Checkpoint(bias=to_restore)
fake_net = tf.train.Checkpoint(l1=fake_layer)
new_root = tf.train.Checkpoint(net=fake_net)
status = new_root.restore(tf.train.latest_checkpoint('./tf_ckpts/'))
print(to_restore.numpy()) # We get the restored value now | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
The dependency graph for these new objects is a much smaller subgraph of the larger checkpoint written above. It includes only the bias and a save counter that tf.train.Checkpoint uses to number checkpoints.
restore returns a status object, which has optional assertions. All of the objects created in the new Checkpoint have been restored, so status.assert_existing_objects_matched passes. | status.assert_existing_objects_matched()
There are many objects in the checkpoint which haven't matched, including the layer's kernel and the optimizer's variables. status.assert_consumed() only passes if the checkpoint and the program match exactly, so it would throw an exception here, as sketched below.
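A minimal way to see that behaviour (a sketch; it assumes the exception raised is an AssertionError, as it is in the standard implementation):

```python
try:
    status.assert_consumed()
except AssertionError as e:
    # the kernel and the optimizer variables were never matched by this partial restore
    print(type(e).__name__, e)
```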
Deferred restorations
Layer objects in TensorFlow may delay the creation of variables to their first call, when input shapes are available. For example, the shape of a Dense layer's kernel depends on both the layer's input and output shapes, so the output shape required as a constructor argument is not enough information to create the variable on its own. Since calling a Layer also reads the variable's value, a restore must happen between the variable's creation and its first use.
To support this idiom, tf.train.Checkpoint defers restores which don't yet have a matching variable. | deferred_restore = tf.Variable(tf.zeros([1, 5]))
print(deferred_restore.numpy()) # Not restored; still zeros
fake_layer.kernel = deferred_restore
print(deferred_restore.numpy()) # Restored | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
Manually inspecting checkpoints
tf.train.load_checkpoint returns a CheckpointReader that gives lower-level access to the checkpoint contents. It contains mappings from each variable's key to the shape and dtype of each variable in the checkpoint. A variable's key is its object path, like in the graphs displayed above.
Note: There is no higher-level structure to the checkpoint. It only knows the paths and values of the variables, and has no concept of models, layers, or how they are connected. | tf.train.list_variables(tf.train.latest_checkpoint('./tf_ckpts/'))
If you're interested in the value of net.l1.kernel, you can get it with the following code: | # Get the reader and its key-to-shape/dtype maps
reader = tf.train.load_checkpoint('./tf_ckpts/')
shape_from_key = reader.get_variable_to_shape_map()
dtype_from_key = reader.get_variable_to_dtype_map()
key = 'net/l1/kernel/.ATTRIBUTES/VARIABLE_VALUE'
print("Shape:", shape_from_key[key])
print("Dtype:", dtype_from_key[key].name) | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
It also provides a get_tensor method allowing you to inspect the value of a variable: | reader.get_tensor(key)
Object tracking
Assigning lists and dictionaries to attributes will track their contents, just like direct attribute assignments such as self.l1 = tf.keras.layers.Dense(5). | save = tf.train.Checkpoint()
save.listed = [tf.Variable(1.)]
save.listed.append(tf.Variable(2.))
save.mapped = {'one': save.listed[0]}
save.mapped['two'] = save.listed[1]
save_path = save.save('./tf_list_example')
restore = tf.train.Checkpoint()
v2 = tf.Variable(0.)
assert 0. == v2.numpy() # Not restored yet
restore.mapped = {'two': v2}
restore.restore(save_path)
assert 2. == v2.numpy() | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
You may notice wrapper objects for lists and dictionaries. These wrappers are checkpointable versions of the underlying data structures. Just like the attribute-based loading above, these wrappers restore a variable's value as soon as it is added to the container. | restore.listed = []
print(restore.listed) # ListWrapper([])
v1 = tf.Variable(0.)
restore.listed.append(v1) # Restores v1, from restore() in the previous cell
assert 1. == v1.numpy() | site/ja/guide/checkpoint.ipynb | tensorflow/docs-l10n | apache-2.0 |
Leave One Out Cross Validation(LOOCV) | from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import LeaveOneOut
from sklearn.metrics import mean_squared_error
clf = LinearRegression()
loo = LeaveOneOut(len(auto_df))
# loo provides the train/test index splits
X = auto_df[['horsepower']].values
y = auto_df['mpg'].values
n = np.shape(X)[0]
mses =[]
for train, test in loo:
Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]
clf.fit(Xtrain,ytrain)
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ytest,ypred))
np.mean(mses)
def loo_shortcut(X,y):
clf = LinearRegression()
clf.fit(X,y)
ypred = clf.predict(X)
xbar = np.mean(X,axis =0)
xsum = np.sum(np.power(X-xbar,2))
nrows = np.shape(X)[0]
mses = []
for row in range(0,nrows):
hi = (1 / nrows) + (np.sum(X[row] - xbar) ** 2 / xsum)
mse = ((y[row] - ypred[row])/(1-hi))**2
mses.append(mse)
return np.mean(mses)
loo_shortcut(auto_df[['horsepower']].values,auto_df['mpg'].values) | basic/Cross-Validation and Bootstrap.ipynb | IgorWang/MachineLearningPracticer | gpl-3.0 |
$$CV_{(n)} = \frac {1} {n} \sum_{i =1}^n (\frac{y_i - \hat y_i}{1- h_i})^2$$
$$ h_i = \frac{1}{n} + \frac{(x_i - \bar x)^2}{\sum_{i'=1}^n (x_{i'} - \bar x)^2} $$ | # Use LOOCV to choose among different complexities of the same model
auto_df['horsepower^2'] = auto_df['horsepower'] * auto_df['horsepower']
auto_df['horsepower^3'] = auto_df['horsepower^2'] * auto_df['horsepower']
auto_df['horsepower^4'] = auto_df['horsepower^3'] * auto_df['horsepower']
auto_df['horsepower^5'] = auto_df['horsepower^4'] * auto_df['horsepower']
auto_df['unit'] = 1
colnames = ["unit", "horsepower", "horsepower^2", "horsepower^3", "horsepower^4", "horsepower^5"]
cv_errors = []
for ncols in range(2,6):
X = auto_df[colnames[0:ncols]]
y = auto_df['mpg']
clf = LinearRegression()
clf.fit(X,y)
cv_errors.append(loo_shortcut(X.values,y.values))
plt.plot(range(1,5),cv_errors)
plt.xlabel('degree')
plt.ylabel('cv.error') | basic/Cross-Validation and Bootstrap.ipynb | IgorWang/MachineLearningPracticer | gpl-3.0 |
K-Fold Cross Validation | from sklearn.cross_validation import KFold
cv_errors = []
for ncols in range(2,6):
X = auto_df[colnames[0:ncols]].values
y = auto_df['mpg'].values
kfold = KFold(len(auto_df),n_folds = 10)
mses =[]
for train,test in kfold:
Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]
clf.fit(Xtrain,ytrain)  # fit on the training fold only
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ypred,ytest))
cv_errors.append(np.mean(mses))
plt.plot(range(1,5),cv_errors)
plt.xlabel("degree")
plt.ylabel('cv.error') | basic/Cross-Validation and Bootstrap.ipynb | IgorWang/MachineLearningPracticer | gpl-3.0 |
Bootstrap | from sklearn.cross_validation import Bootstrap
cv_errors = []
for ncols in range(2,6):
X = auto_df[colnames[0:ncols]].values
y = auto_df['mpg'].values
n = len(auto_df)
bs = Bootstrap(n,train_size=int(0.9*n),test_size=int(0.1*n),n_iter=10,random_state=0)
mses = []
for train,test in bs:
Xtrain,ytrain,Xtest,ytest = X[train],y[train],X[test],y[test]
clf = LinearRegression()
clf.fit(Xtrain,ytrain)  # fit on the bootstrap training sample only
ypred = clf.predict(Xtest)
mses.append(mean_squared_error(ypred,ytest))
cv_errors.append(np.mean(mses))
plt.plot(range(1,5),cv_errors)
plt.xlabel('degree')
plt.ylabel('cv.error') | basic/Cross-Validation and Bootstrap.ipynb | IgorWang/MachineLearningPracticer | gpl-3.0 |
One of the main things that we want to do in scientific computing is get data into and out of our programs. In addition to plain text files, there are modules that can read lots of different data formats we might encounter.
Print
We've already been using print quite a bit, but now we'll look at how to control how information is printed. Note that there is an older and newer way to format print statements -- we'll focus only on the newer way (it's nicer).
This is compatible with both python 2 and 3 | x = 1
y = 0.0000354
z = 3.0
s = "my string"
print(x) | extra/python-io.ipynb | sbu-python-summer/python-tutorial | bsd-3-clause |
We write a string with {} embedded to indicate where variables are to be inserted. Note that {} can take arguments. We use the format() method on the string to match the variables to the {}. | print("x = {}, y = {}, z = {}, s = {}".format(x, y, z, s)) | extra/python-io.ipynb | sbu-python-summer/python-tutorial | bsd-3-clause |
Before the colon, we can give an optional index/position/descriptor of the value we want to print.
After the colon we give a format specifier. It has a number field and a type, like f and g, to describe how floating point numbers appear and how much precision to show. Other bits are possible as well (like justification). | print("x = {0}, y = {1:10.5g}, z = {2:.3f}, s = {3}".format(x, y, z, s))
there are other formatting things, like justification, etc. See the tutorial | print("{:^80}".format("centered string")) | extra/python-io.ipynb | sbu-python-summer/python-tutorial | bsd-3-clause |
File I/O
as expected, a file is an object. A try/except block can be used to capture exceptions, for example if the file cannot be opened (see the sketch after this cell). | f = open("./sample.txt", "w")
print(f)
f.write("this is my first write\n")
f.close() | extra/python-io.ipynb | sbu-python-summer/python-tutorial | bsd-3-clause |
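A hedged sketch of the try/except pattern mentioned above, using a file name that (intentionally) does not exist:

```python
try:
    f = open("./no_such_file.txt", "r")
except IOError as e:
    print("could not open file:", e)
else:
    print(f.readline())
    f.close()
```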
we can easily loop over the lines in a file | f = open("./test.txt", "r")
for line in f:
print(line.split())
f.close() | extra/python-io.ipynb | sbu-python-summer/python-tutorial | bsd-3-clause |
as mentioned earlier, there are lots of string functions. Above we used split() to break each line into its whitespace-separated fields; strip() can similarly remove trailing whitespace and newline characters.
CSV Files
comma-separated values are an easy way to exchange data -- you can generate these from a spreadsheet program. In the example below, we are assuming that the first line of the spreadsheet/csv file gives the headings that identify the columns.
Note that there is an amazing amount of variation in terms of what can be in a CSV file and what the format is -- the csv module does a good job sorting this all out for you. | import csv
reader = csv.reader(open("shopping.csv", "r"))
headings = None
items = []
quantity = []
unit_price = []
total = []
for row in reader:
if headings == None:
# first row
headings = row
else:
items.append(row[headings.index("item")])
quantity.append(row[headings.index("quantity")])
unit_price.append(row[headings.index("unit price")])
total.append(row[headings.index("total")])
for i, q in zip(items, quantity):
print ("item: {}, quantity: {}".format(i, q))
| extra/python-io.ipynb | sbu-python-summer/python-tutorial | bsd-3-clause |
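Because the first row holds the column headings, the same file can also be read with csv.DictReader, which does the heading bookkeeping for us (a sketch using the same shopping.csv and column names):

```python
import csv

items = []
quantity = []

with open("shopping.csv", "r") as f:
    for row in csv.DictReader(f):
        items.append(row["item"])
        quantity.append(row["quantity"])

for i, q in zip(items, quantity):
    print("item: {}, quantity: {}".format(i, q))
```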
Last 24 hours: | # reading is once a minute, so take last 24 * 60 readings
def plotem(data, n=-60):
if n < 0:
start = n
end = len(data)
else:
start = 0
end = n
data[['temp', 'altitude', 'humidity']][start:end].plot(subplots=True)  # use the computed slice bounds
plotem(data, -24*60)
data.altitude[-8*60:].plot() | posts/latest-weather-pijessie.ipynb | peakrisk/peakrisk | gpl-3.0 |
Last week | # reading is once a minute, so take last 7 * 24 * 60 readings
plotem(data, -7*24*60)
plotem(data) | posts/latest-weather-pijessie.ipynb | peakrisk/peakrisk | gpl-3.0 |
Look at all the data | data.describe()
data.tail() | posts/latest-weather-pijessie.ipynb | peakrisk/peakrisk | gpl-3.0 |
I currently have two temperature sensors:
DHT22 sensor which gives temperature and humidity.
BMP180 sensor which gives pressure and temperature.
The plot below shows the two temperature plots.
Both these sensors are currently in my study. For temperature and humidity I would like to have some readings from outside. If I can solder them to a phone jack then I can just run phone cable to where they need to be.
Below plots the current values from these sensors. This is handy for calibration. | data[['temp', 'temp_dht']].plot() | posts/latest-weather-pijessie.ipynb | peakrisk/peakrisk | gpl-3.0 |
Dew Point
The warmer air is, the more moisture it can hold. The dew point is
the temperature at which air would be totally saturated if it had as
much moisture as it currently does.
Given the temperature and humidity the dew point can be calculated, the actual formula is
pretty complex.
It is explained in more detail here: http://iridl.ldeo.columbia.edu/dochelp/QA/Basic/dewpoint.html
If you are interested in a simpler calculation that gives an approximation of dew point temperature if you know the observed temperature and relative humidity, the following formula was proposed in a 2005 article by Mark G. Lawrence in the Bulletin of the American Meteorological Society:
$$Td = T - ((100 - RH)/5.)$$ | data['dewpoint'] = data.temp - ((100. - data.humidity)/5.)
data[['temp', 'dewpoint', 'humidity']].plot()
data[['temp', 'dewpoint', 'humidity']].plot(subplots=True)
data[['temp', 'dewpoint']].plot()
data.altitude.plot() | posts/latest-weather-pijessie.ipynb | peakrisk/peakrisk | gpl-3.0 |
Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one. | def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
x1 = tf.reshape(x1, )
# Output layer, 32x32x3
logits =
out = tf.tanh(logits)
return out | dcgan-svhn/DCGAN_Exercises.ipynb | brandoncgay/deep-learning | mit |
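The cell above is deliberately left incomplete as an exercise. Below is a minimal sketch of one possible completion (not the official solution); it sticks to the tf.layers API already used in the skeleton, and the layer sizes follow the 4x4x512 to 32x32x3 progression described in the text:

```python
def generator_sketch(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # Project and reshape to a deep, narrow layer: 4x4x512
        x1 = tf.layers.dense(z, 4 * 4 * 512)
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.layers.batch_normalization(x1, training=training)
        x1 = tf.maximum(alpha * x1, x1)  # leaky ReLU

        # 4x4x512 -> 8x8x256
        x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
        x2 = tf.layers.batch_normalization(x2, training=training)
        x2 = tf.maximum(alpha * x2, x2)

        # 8x8x256 -> 16x16x128
        x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')
        x3 = tf.layers.batch_normalization(x3, training=training)
        x3 = tf.maximum(alpha * x3, x3)

        # 16x16x128 -> 32x32x3, no batch norm on the output layer
        logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')
        out = tf.tanh(logits)

        return out
```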
Here is what's in the file:
- 4 dimensions: station, member, time and nchar
- Variables:
- station(station) == station_id
- member(member)
- time(time) Time in s since 1.1.1970
- t2m_fc(time, member, station)
- t2m_obs(time, station)
- station_alt(station) Altitude of station in m
- station_lat(station) Latitude in degrees
- station_lon(station) Longitude in degrees
- station_id(station) == station
- station_loc(station, nchar) Location name
So how much training data do we have? | # Total amount of data
rg.dimensions['station'].size * rg.dimensions['time'].size / 2
# Rough data amount per month
rg.dimensions['station'].size * rg.dimensions['time'].size / 2 / 12.
# Per station per month
rg.dimensions['time'].size / 2 / 12. | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
Ok, let's now look at some of the variables.
1.1 Time | time = rg.variables['time']
time
time[:5] | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
In fact, the time is given in seconds rather than hours. | # convert back to dates (http://unidata.github.io/netcdf4-python/#section7)
from netCDF4 import num2date
dates = num2date(time[:],units='seconds since 1970-01-01 00:00 UTC')
dates[:5] | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
So dates are in 12-hour intervals. This means that, since we downloaded 36/48 h forecasts, the 12 UTC dates correspond to the 36-hour forecasts and the following 00 UTC dates correspond to the same forecast at a 48-hour lead time.
1.2 Station variables
Station and station ID are in fact the same and simply contain a number, which does not start at one and is not continuous. | import numpy as np
# Check whether the two variables are equal
np.array_equal(rg.variables['station'][:], rg.variables['station_id'][:])
# Then just print the first 5
rg.variables['station'][:5] | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
station_alt contains the station altitude in meters. | rg.variables['station_alt'][:5]
rg.variables['station_loc'][0].data | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
Ahhhh, Aachen :D
So this leads me to believe that the station numbering is done by name. | station_lon = rg.variables['station_lon']
station_lat = rg.variables['station_lat']
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(station_lon[:], station_lat[:]) | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
Wohooo, Germany!
1.3 Temperature forecasts and observations
Ok, so now let's explore the temperature data a little. | # Let's extract the actual data from the NetCDF array
# Then we can manipulate it later.
tfc = rg.variables['t2m_fc'][:]
tobs = rg.variables['t2m_obs'][:]
tobs[:5, :5].data | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
So there are actually missing data in the observations. We will need to think about how to deal with those.
Sebastian mentioned that in the current version there are some Celsius/Kelvin inconsistencies.
# Since this will be fixed soon, let's just create a little ad hoc fix
idx = np.where(np.mean(tfc, axis=(1, 2)) > 100)[0][0]
tfc[idx:] = tfc[idx:] - 273.15
# Let's create a little function to visualize the ensemble forecast
# and the corresponding observation
def plot_fc_obs_hist(t, s):
plt.hist(tfc[t, :, s])
plt.axvline(tobs[t, s], c='r')
plt.title(num2date(time[t], units='seconds since 1970-01-01 00:00 UTC'))
plt.show()
# Now let's plot some forecast for random time steps and stations
plot_fc_obs_hist(0, 0)
plot_fc_obs_hist(100, 100)
plot_fc_obs_hist(1000, 200) | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
2. Slicing the data
Now let's see how we can conveniently prepare the data in chunks for the post-processing purposes.
2.1 Monthly slices
The goal here is to pick all data points from a given month and also for a given time, so 00 or 12 UTC | # Let's write a handy function which returns the required data
# from the NetCDF object
def get_data_slice(rg, month, utc=0):
# Get array of datetime objects
dates = num2date(rg.variables['time'][:],
units='seconds since 1970-01-01 00:00 UTC')
# Extract months and hours
months = np.array([d.month for d in list(dates)])
hours = np.array([d.hour for d in list(dates)])
# for now I need to include the Kelvin fix
tfc = rg.variables['t2m_fc'][:]
idx = np.where(np.mean(tfc, axis=(1, 2)) > 100)[0][0]
tfc[idx:] = tfc[idx:] - 273.15
# Extract the requested data
tobs = rg.variables['t2m_obs'][(months == month) & (hours == utc)]
tfc = tfc[(months == month) & (hours == utc)]
return tobs, tfc
tobs_jan_00, tfc_jan_00 = get_data_slice(rg, 1, 0)
tfc_jan_00.shape | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
3. Compute the parametric and sample CRPS for the raw ensemble data
3.1 CRPS for a normal distribution
From Gneiting et al. 2005, EMOS: | from scipy.stats import norm
def crps_normal(mu, sigma, y):
loc = (y - mu) / sigma
crps = sigma * (loc * (2 * norm.cdf(loc) - 1) +
2 * norm.pdf(loc) - 1. / np.sqrt(np.pi))
return crps
# Get ensmean and ensstd
tfc_jan_00_mean = np.mean(tfc_jan_00, axis=1)
tfc_jan_00_std = np.std(tfc_jan_00, axis=1, ddof=1)
# Compute CRPS using the ensemble mean and variance
crps_jan_00 = crps_normal(tfc_jan_00_mean, tfc_jan_00_std, tobs_jan_00)
# Warnings are probably due to missing values
crps_jan_00.mean() | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
Nice, this corresponds well to the value Sebastian got for the raw ensemble in January.
3.2 Sample CRPS
For this we use the scoringRules package inside enstools. | import sys
sys.path.append('/Users/stephanrasp/repositories/enstools')
import enstools.scores  # binds enstools.scores so that enstools.scores.crps_sample can be called below
??enstools.scores.crps_sample
tfc_jan_00.shape
tobs_jan_00.shape
tfc_jan_00_flat = np.rollaxis(tfc_jan_00, 1, 0)
tfc_jan_00_flat.shape
tfc_jan_00_flat = tfc_jan_00_flat.reshape(tfc_jan_00_flat.shape[0], -1)
tfc_jan_00_flat.shape
tobs_jan_00_flat = tobs_jan_00.ravel()
mask = tobs_jan_00_flat.mask
tobs_jan_00_flat_true = np.array(tobs_jan_00_flat)[~mask]
tfc_jan_00_flat_true = np.array(tfc_jan_00_flat)[:, ~mask]
np.isfinite(tobs_jan_00_flat_true)
tfc_jan_00_flat_true.shape
enstools.scores.crps_sample(tobs_jan_00_flat_true, tfc_jan_00_flat_true, mean=True) | data_exploration/python_data_handling.ipynb | slerch/ppnn | mit |
Configuration
This configuration determines whether functions print logs during the execution. | debugMode = True | applications/notebooks/laurens/comparisons.ipynb | phenology/infrastructure | apache-2.0 |
Connect to Spark
Here, the Spark context is loaded, which allows for a connection to HDFS. | appName = "plot_GeoTiff"
masterURL = "spark://emma0.emma.nlesc.nl:7077"
#A context needs to be created if it does not already exist
try:
sc.stop()
except NameError:
print("A new Spark Context will be created.")
sc = SparkContext(conf = SparkConf().setAppName(appName).setMaster(masterURL))
conf = sc.getConf() | applications/notebooks/laurens/comparisons.ipynb | phenology/infrastructure | apache-2.0 |
Helper functions and mode comparisons | def getModeAsArray(filePath):
data = sc.binaryFiles(filePath).take(1)
byteArray = bytearray(data[0][1])
memfile = MemoryFile(byteArray)
dataset = memfile.open()
array = np.array(dataset.read()[0], dtype=np.float64)
memfile.close()
array = array.flatten()
array = array[~np.isnan(array)]
return array
def detemineNorm(array1, array2):
if array1.shape != array2.shape:
print("Error: shapes are not the same: (" + str(array1.shape) + " vs " + str(array2.shape) + ")")
return 0
value = scipy.linalg.norm(array1 - array2)
if value > 1:
value = scipy.linalg.norm(array1 + array2)
return value
textFile1 = sc.textFile("hdfs:///user/emma/svd/spark/BloomGridmetLeafGridmetCali3/U.csv").map(lambda line: (line.split(','))).map(lambda m: [ float(i) for i in m]).collect()
array1 = numpy.array(textFile1, dtype=np.float64)
vector11 = array1.T[0]
vector12 = array1.T[1]
vector13 = array1.T[2]
textFile2 = sc.textFile("hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/U.csv").map(lambda line: (line.split(','))).map(lambda m: [ np.float64(i) for i in m]).collect()
array2 = numpy.array(textFile2, dtype=np.float64).reshape(37,23926)
vector21 = array2[0]
vector22 = array2[1]
vector23 = array2[2]
array2.shape
print(detemineNorm(vector11, vector21))
print(detemineNorm(vector12, vector22))
print(detemineNorm(vector13, vector23))
array1 = getModeAsArray("hdfs:///user/emma/svd/spark/BloomGridmetLeafGridmetCali3/u_tiffs/svd_u_0_26.tif")
array2 = getModeAsArray("hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/ModeU01.tif")
detemineNorm(array1, array2)
print(detemineNorm(array1, vector11))
print(detemineNorm(array1, vector21))
print(detemineNorm(array2, vector11))
print(detemineNorm(array2, vector21))
~np.in1d(array1, vector21)
for i in range(10):
print("%.19f %0.19f %0.19f" % (array1[i], array2[i], (array1[i]+array2[i]))) | applications/notebooks/laurens/comparisons.ipynb | phenology/infrastructure | apache-2.0 |
BloomFinalLowPR and LeafFinalLowPR | array1 = getModeAsArray("hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/ModeU01.tif")
array2 = getModeAsArray("hdfs:///user/emma/svd/spark/BloomFinalLowPRLeafFinalLowPR3/u_tiffs/svd_u_0_3.tif")
detemineNorm(array1, array2)
array1 = getModeAsArray("hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/ModeU02.tif")
array2 = getModeAsArray("hdfs:///user/emma/svd/spark/BloomFinalLowPRLeafFinalLowPR3/u_tiffs/svd_u_1_3.tif")
detemineNorm(array1, array2)
array1 = getModeAsArray("hdfs:///user/emma/svd/BloomFinalLowPRLeafFinalLowPR/ModeU01.tif")
array2 = getModeAsArray("hdfs:///user/emma/svd/spark/BloomFinalLowPRLeafFinalLowPR3/u_tiffs/svd_u_0_3.tif")
detemineNorm(array1, array2) | applications/notebooks/laurens/comparisons.ipynb | phenology/infrastructure | apache-2.0 |
BloomGridmet and LeafGridmet | for i in range(37):
if (i < 9):
path1 = "hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/ModeU0"+ str(i+1) + ".tif"
else:
path1 = "hdfs:///user/emma/svd/BloomGridmetLeafGridmetCali/ModeU"+ str(i+1) + ".tif"
array1 = getModeAsArray(path1)
array2 = getModeAsArray("hdfs:///user/emma/svd/spark/BloomGridmetLeafGridmetCali3/u_tiffs/svd_u_" +str(i) +"_26.tif")
print(detemineNorm(array1, array2)) | applications/notebooks/laurens/comparisons.ipynb | phenology/infrastructure | apache-2.0 |
\begin{equation}
A = \left( \begin{array}{rrr}
0.01 & 0.0012 & 0.000 \\
1.00 & 99.9000 & 0.010 \\
1.20 & 999999.1230 & 0.001 \\
\end{array} \right)
\end{equation} | writer = pytablewriter.LatexMatrixWriter()
writer.table_name = "B"
writer.value_matrix = [
["a_{11}", "a_{12}", "\\ldots", "a_{1n}"],
["a_{21}", "a_{22}", "\\ldots", "a_{2n}"],
[r"\vdots", "\\vdots", "\\ddots", "\\vdots"],
["a_{n1}", "a_{n2}", "\\ldots", "a_{nn}"],
]
writer.write_table() | test/data/pytablewriter_examples.ipynb | thombashi/sqlitebiter | mit |
\begin{equation}
B = \left( \begin{array}{llll}
a_{11} & a_{12} & \ldots & a_{1n} \\
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \ldots & a_{nn} \\
\end{array} \right)
\end{equation} | writer = pytablewriter.LatexTableWriter()
writer.header_list = header_list
writer.value_matrix = data
writer.write_table() | test/data/pytablewriter_examples.ipynb | thombashi/sqlitebiter | mit |
\begin{array}{r | r | l | l | l | l} \hline
\verb|int| & \verb|float| & \verb|str| & \verb|bool| & \verb|mix| & \verb|time| \\ \hline
\hline
0 & 0.10 & hoge & True & 0 & \verb|2017-01-01 03:04:05+0900| \\ \hline
2 & -2.23 & foo & False & & \verb|2017-12-23 12:34:51+0900| \\ \hline
3 & 0.00 & bar & True & \infty & \verb|2017-03-03 22:44:55+0900| \\ \hline
-10 & -9.90 & & False & NaN & \verb|2017-01-01 00:00:00+0900| \\ \hline
\end{array} | writer = pytablewriter.MarkdownTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.table_name = "write example with a margin"
writer.header_list = header_list
writer.value_matrix = data
writer.margin = 1 # add a whitespace for both sides of each cell
writer.write_table()
writer = pytablewriter.MediaWikiTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.NumpyTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.PandasDataFrameWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.PandasDataFrameWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.is_datetime_instance_formatting = False
writer.write_table()
writer = pytablewriter.PythonCodeTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.PythonCodeTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.is_datetime_instance_formatting = False
writer.write_table()
writer = pytablewriter.RstGridTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.RstSimpleTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.RstCsvTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.LtsvTableWriter()
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
writer = pytablewriter.TomlTableWriter()
writer.table_name = table_name
writer.header_list = header_list
writer.value_matrix = data
writer.write_table()
from datetime import datetime
import pytablewriter as ptw
writer = ptw.JavaScriptTableWriter()
writer.header_list = ["header_a", "header_b", "header_c"]
writer.value_matrix = [
[-1.1, "2017-01-02 03:04:05", datetime(2017, 1, 2, 3, 4, 5)],
[0.12, "2017-02-03 04:05:06", datetime(2017, 2, 3, 4, 5, 6)],
]
print("// without type hints: column data types detected automatically by default")
writer.table_name = "without type hint"
writer.write_table()
print("// with type hints: Integer, DateTime, String")
writer.table_name = "with type hint"
writer.type_hint_list = [ptw.Integer, ptw.DateTime, ptw.String]
writer.write_table()
from datetime import datetime
import pytablewriter as ptw
writer = ptw.PythonCodeTableWriter()
writer.value_matrix = [
[-1.1, float("inf"), "2017-01-02 03:04:05", datetime(2017, 1, 2, 3, 4, 5)],
[0.12, float("nan"), "2017-02-03 04:05:06", datetime(2017, 2, 3, 4, 5, 6)],
]
# column data types detected automatically by default
writer.table_name = "python variable without type hints"
writer.header_list = ["float", "infnan", "string", "datetime"]
writer.write_table()
# set type hints
writer.table_name = "python variable with type hints"
writer.header_list = ["hint_int", "hint_str", "hint_datetime", "hint_str"]
writer.type_hint_list = [ptw.Integer, ptw.String, ptw.DateTime, ptw.String]
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.from_csv(
dedent(
"""\
"i","f","c","if","ifc","bool","inf","nan","mix_num","time"
1,1.10,"aa",1.0,"1",True,Infinity,NaN,1,"2017-01-01 00:00:00+09:00"
2,2.20,"bbb",2.2,"2.2",False,Infinity,NaN,Infinity,"2017-01-02 03:04:05+09:00"
3,3.33,"cccc",-3.0,"ccc",True,Infinity,NaN,NaN,"2017-01-01 00:00:00+09:00"
"""
)
)
writer.write_table()
writer = pytablewriter.MarkdownTableWriter()
writer.table_name = "ps"
writer.from_csv(
dedent(
"""\
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.4 77664 8784 ? Ss May11 0:02 /sbin/init
root 2 0.0 0.0 0 0 ? S May11 0:00 [kthreadd]
root 4 0.0 0.0 0 0 ? I< May11 0:00 [kworker/0:0H]
root 6 0.0 0.0 0 0 ? I< May11 0:00 [mm_percpu_wq]
root 7 0.0 0.0 0 0 ? S May11 0:01 [ksoftirqd/0]
"""
),
delimiter=" ",
)
writer.write_table() | test/data/pytablewriter_examples.ipynb | thombashi/sqlitebiter | mit |
As you can see, the function can also walk the class hierarchy, so the check is not as trivial as the one you would obtain by directly using type().
The isinstance() function, however, does not completely solve the problem. If we write a class that actually acts like a list but does not inherit from it, isinstance() does not recognize the fact that the two may be considered the same thing. The following code returns False regardless of the content of the MyList class | class MyList:
pass
ml = MyList()
isinstance(ml, list) | notebooks/giordani/Python_3_OOP_Part_6__Abstract_Base_Classes.ipynb | Heroes-Academy/OOP_Spring_2016 | mit |
since isinstance() does not check the content of the class or its behaviour, it just considers the class and its ancestors.
The problem, thus, may be summed up with the following question: what is the best way to test that an object exposes a given interface? Here, the word interface is used for its natural meaning, without any reference to other programming solutions, which however address the same problem.
A good way to address the problem could be to write inside an attribute of the object the list of interfaces it promises to implement, and to agree that any time we want to test the behaviour of an object we simply have to check the content of this attribute. This is exactly the path followed by Python, and it is very important to understand that the whole system is just about a promised behaviour.
The solution proposed through PEP 3119 is, in my opinion, very simple and elegant, and it perfectly fits the nature of Python, where things are usually agreed rather than being enforced. Moreover, the solution follows the spirit of polymorphism, where information is provided by the object itself and not extracted by the calling code.
In the next sections I am going to try and describe this solution in its main building blocks. The matter is complex so my explanation will lack some details: please refer to the forementioned PEP 3119 for a complete description.
Who Framed the Metaclasses
As already described, Python provides two built-ins to inspect objects and classes, which are isinstance() and issubclass() and it would be desirable that a solution to the inspection problem allows the programmer to go on with using those two functions.
This means that we need to find a way to inject the "behaviour promise" into both classes and instances. This is the reason why metaclasses come in play. Recall what we said about them in the fifth issue of this series: metaclasses are the classes used to build classes, which means that they are the preferred way to change the structure of a class, and, in consequence, of its instances.
Another way to do the same job would be to leverage the inheritance mechanism, injecting the behaviour through a dedicated parent class. This solution has many downsides, which I am not going to detail. It is enough to say that affecting the class hierarchy may lead to complex situations or subtle bugs. Metaclasses may provide here a different entry point for the introduction of a "virtual base class" (as PEP 3119 specifies, this is not the same concept as in C++).
Overriding Places
As said, isinstance() and issubclass() are built-in functions, not object methods, so we cannot simply override them by providing a different implementation in a given class. The first part of the solution is therefore to change the behaviour of those two functions so that they first check whether the class or the instance contains a special method, which is __instancecheck__() for isinstance() and __subclasscheck__() for issubclass(). Both built-ins try to run the respective special method, reverting to the standard algorithm if it is not present.
A note about naming. Methods must accept the object they belong to as the first argument, so the two special methods shall have the form
``` python
def __instancecheck__(cls, inst):
    [...]
def __subclasscheck__(cls, sub):
    [...]
```
where cls is the class where they are injected, that is the one representing the promised behaviour. The two built-ins, however, have a reversed argument order, where the behaviour comes after the tested object: when you write isinstance([], list) you want to check if the [] instance has the list behaviour. This is the reason behind the name choice: just calling the methods __isinstance__() and __issubclass__() and passing arguments in a reversed order would have been confusing.
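To make the mechanism concrete, here is a tiny, contrived sketch of a class whose metaclass overrides __instancecheck__(); the names are invented purely for illustration:

```python
class AlwaysInstanceMeta(type):
    def __instancecheck__(cls, inst):
        # isinstance(obj, Anything) now runs this method instead of the default algorithm
        return True

class Anything(metaclass=AlwaysInstanceMeta):
    pass

print(isinstance(42, Anything))      # True
print(isinstance("spam", Anything))  # True
```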
This is ABC
The proposed solution is thus called Abstract Base Classes, as it provides a way to attach to a concrete class a virtual class with the only purpose of signaling a promised behaviour to anyone inspecting it with isinstance() or issubclass().
To help programmers implement Abstract Base Classes, the standard library has been given an abc module, which contains the ABCMeta class (and other facilities). This class is the one that implements __instancecheck__() and __subclasscheck__() and shall be used as a metaclass to augment a standard class. The latter will then be able to register other classes as implementations of its behaviour.
Sounds complex? An example may clarify the whole matter. The one from the official documentation is rather simple: | from abc import ABCMeta
class MyABC(metaclass=ABCMeta):
pass
MyABC.register(tuple)
assert issubclass(tuple, MyABC)
assert isinstance((), MyABC) | notebooks/giordani/Python_3_OOP_Part_6__Abstract_Base_Classes.ipynb | Heroes-Academy/OOP_Spring_2016 | mit |
Here, the MyABC class is provided the ABCMeta metaclass. This puts the two __instancecheck__() and __subclasscheck__() methods inside MyABC so that, when issuing isinstance(), what Python actually executes is | d = {'a': 1}
isinstance(d, MyABC)
MyABC.__class__.__instancecheck__(MyABC, d)
isinstance((), MyABC)
MyABC.__class__.__instancecheck__(MyABC, ()) | notebooks/giordani/Python_3_OOP_Part_6__Abstract_Base_Classes.ipynb | Heroes-Academy/OOP_Spring_2016 | mit |
After the definition of MyABC we need a way to signal that a given class is an instance of the Abstract Base Class, and this happens through the register() method provided by the ABCMeta metaclass. By calling MyABC.register(tuple) we record inside MyABC the fact that the tuple class shall be identified as a subclass of MyABC itself. This is analogous to saying that tuple inherits from MyABC, but not quite the same. As already said, registering a class in an Abstract Base Class with register() does not affect the class hierarchy: the tuple class itself is unchanged.
The current implementation of ABCs stores the registered types inside the _abc_registry attribute. Actually, it stores weak references to the registered types there (this part is outside the scope of this article, so I'm not detailing it). | MyABC._abc_registry.data
Sigmoid
The sigmoid function was once the default choice of activation function when building a network, and to some extent it still is. Because it maps values into a range between 0 and 1, it lacks the beneficial quality of being zero-centered, a property that aids gradient descent during backpropagation. | def activation_sigmoid(x, derivative):
sigmoid_value = 1/(1+np.exp(-x))
if not derivative:
return sigmoid_value
else:
return sigmoid_value*(1-sigmoid_value) | neural-networks/defining_activation_functions.ipynb | tpin3694/tpin3694.github.io | mit |
When plotted on a range of -5,5, this gives the following shape. | x_values = np.arange(-5, 6, 0.1)
y_sigmoid = activation_sigmoid(x_values, derivative=False)
plt.plot(x_values, y_sigmoid) | neural-networks/defining_activation_functions.ipynb | tpin3694/tpin3694.github.io | mit |
Tanh
tanh is very similar in shape to the sigmoid; however, the defining difference is that tanh ranges from -1 to 1, making it zero-centered and consequently a very popular choice. Conveniently, tanh is pre-defined in NumPy, but it is still worthwhile wrapping it in a function in order to define its derivative. | def activation_tanh(x, derivative):
tanh_value = np.tanh(x)
if not derivative:
return tanh_value
else:
return 1-tanh_value**2
y_tanh = activation_tanh(x_values, derivative = False)
plt.plot(x_values, y_tanh) | neural-networks/defining_activation_functions.ipynb | tpin3694/tpin3694.github.io | mit |
ReLU
The Rectified Linear Unit is another commonly used activation function, with a range from 0 to infinity. A major advantage of the ReLU is that, unlike the sigmoid and tanh, its gradient does not vanish (saturate) for large positive inputs. An additional benefit of the ReLU is its computational efficiency, as shown by Krizhevsky et al., who found networks with ReLU to train around six times faster than with tanh. | def relu_activation(x, derivative):
if not derivative:
return x * (x>0)
else:
x[x <= 0] = 0
x[x > 0] = 1
return x
y_relu = relu_activation(x_values, derivative=False)
plt.plot(x_values, y_relu) | neural-networks/defining_activation_functions.ipynb | tpin3694/tpin3694.github.io | mit |
It is probably worth noting that the leaky ReLU is a closely related function; the only difference is that inputs < 0 are not set to 0 but are instead multiplied by a small constant such as 0.01.
Softmax
The final function to be discussed is the softmax, a function typically used in the final layer of a network. The softmax reduces the value of each neurone in the final layer to a value between 0 and 1, such that all values in the final layer sum to 1. The benefit of this is that in a multi-class classification problem, the softmax assigns a probability to each class, allowing deeper insight into the performance of the network through metrics such as top-n error. Note that the softmax is sometimes written without the subtraction of np.max(x); subtracting the maximum stabilises the function, because the exponent in the softmax can otherwise produce values larger than the largest floating-point number that can be represented, causing an overflow. | def softmax_activation(x):
exponent = np.exp(x - np.max(x))
softmax_value = exponent/np.sum(exponent, axis = 0)
return softmax_value
y_softmax = softmax_activation(x_values)
plt.plot(x_values, y_softmax)
print("The sum of all softmax probabilities can be confirmed as " + str(np.sum(y_softmax))) | neural-networks/defining_activation_functions.ipynb | tpin3694/tpin3694.github.io | mit |
Initializing Python | #!/usr/bin/env python
# -*- coding: UTF-8
# IMPORTING KEY PACKAGES
import csv # for reading in CSVs and turning them into dictionaries
import re # for regular expressions
import os # for navigating file trees
import nltk # for natural language processing tools
import pandas # for working with dataframes
import numpy as np # for working with numbers
# FOR CLEANING, TOKENIZING, AND STEMMING THE TEXT
from nltk import word_tokenize, sent_tokenize # widely used text tokenizer
from nltk.stem.porter import PorterStemmer # an approximate method of stemming words (it just cuts off the ends)
from nltk.corpus import stopwords # for one method of eliminating stop words, to clean the text
stopenglish = list(stopwords.words("english")) # assign the string of english stopwords to a variable and turn it into a list
import string # for one method of eliminating punctuation
punctuations = list(string.punctuation) # assign the string of common punctuation symbols to a variable and turn it into a list
# FOR ANALYZING WITH THE TEXT
from sklearn.feature_extraction.text import CountVectorizer # to work with document-term matrices, especially
countvec = CountVectorizer(tokenizer=nltk.word_tokenize)
from sklearn.feature_extraction.text import TfidfVectorizer # for creating TF-IDFs
tfidfvec = TfidfVectorizer()
from sklearn.decomposition import LatentDirichletAllocation # for topic modeling
import gensim # for word embedding models
from scipy.spatial.distance import cosine # for cosine similarity
from sklearn.metrics import pairwise # for pairwise similarity
from sklearn.manifold import MDS, TSNE # for multi-dimensional scaling
# FOR VISUALIZATIONS
import matplotlib
import matplotlib.pyplot as plt
# Visualization parameters
% pylab inline
% matplotlib inline
matplotlib.style.use('ggplot') | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
Reading in preliminary data | sample = [] # make empty list
with open('../data_URAP_etc/mission_data_prelim.csv', 'r', encoding = 'Latin-1')\
as csvfile: # open file
reader = csv.DictReader(csvfile) # create a reader
for row in reader: # loop through rows
sample.append(row) # append each row to the list
sample[0]
# Take a look at the most important contents and the variables list
# in our sample (a list of dictionaries)--let's look at just the first entry
print(sample[1]["SCHNAM"], "\n", sample[1]["URL"], "\n", sample[1]["WEBTEXT"], "\n")
print(sample[1].keys()) # look at all the variables!
# Read the data in as a pandas dataframe
df = pandas.read_csv("../data_URAP_etc/mission_data_prelim.csv", encoding = 'Latin-1')
df = df.dropna(subset=["WEBTEXT"]) # drop any schools with no webtext that might have snuck in (none currently)
# Add additional variables for analysis:
# PCTETH = percentage of enrolled students belonging to a racial minority
# this includes American Indian, Asian, Hispanic, Black, Hawaiian, or Pacific Islander
df["PCTETH"] = (df["AM"] + df["ASIAN"] + df["HISP"] + df["BLACK"] + df["PACIFIC"]) / df["MEMBER"]
df["STR"] = df["MEMBER"] / df["FTE"] # Student/teacher ratio
df["PCTFRPL"] = df["TOTFRL"] / df["MEMBER"] # Percent of students receiving FRPL
# Another interesting variable:
# TYPE = type of school, where 1 = regular, 2 = special ed, 3 = vocational, 4 = other/alternative, 5 = reportable program
## Print the webtext from the first school in the dataframe
print(df.iloc[0]["WEBTEXT"]) | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
Descriptive statistics
How urban proximity is coded: Lower number = more urban (closer to large city)
More specifically, it uses two digits with distinct meanings:
- the first digit:
- 1 = city
- 2 = suburb
- 3 = town
- 4 = rural
- the second digit:
- 1 = large or fringe
- 2 = mid-size or distant
- 3 = small/remote | print(df.describe()) # get descriptive statistics for all numerical columns
print()
print(df['ULOCAL'].value_counts()) # frequency counts for categorical data
print()
print(df['LEVEL'].value_counts()) # treat grade range served as categorical
# Codes for level/ grade range served: 3 = High school, 2 = Middle school, 1 = Elementary, 4 = Other)
print()
print(df['LSTATE'].mode()) # find the most common state represented in these data
print(df['ULOCAL'].mode()) # find the most urbanicity represented in these data
# print(df['FTE']).mean() # What's the average number of full-time employees by school?
# print(df['STR']).mean() # And the average student-teacher ratio?
# here's the number of schools from each state, in a graph:
grouped_state = df.groupby('LSTATE')
grouped_state['WEBTEXT'].count().sort_values(ascending=True).plot(kind = 'bar', title='Schools mostly in CA, TX, AZ, FL--similar to national trend')
plt.show()
# and here's the number of schools in each urban category, in a graph:
grouped_urban = df.groupby('ULOCAL')
grouped_urban['WEBTEXT'].count().sort_values(ascending=True).plot(kind = 'bar', title='Most schools are in large cities or large suburbs')
plt.show() | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
What these numbers say about the charter schools in the sample:
Most are located in large cities, followed by large suburbs, then medium and small city, and then rural.
The means for percent minorities and students receiving free- or reduced-price lunch are both about 60%.
Most are in CA, TX, AZ, and FL
Most of the schools in the sample are primary schools
This means that the sample reflects national averages. In that sense, this sample isn't so bad.
Cleaning, tokenizing, and stemming the text | # Now we clean the webtext by rendering each word lower-case then removing punctuation.
df['webtext_lc'] = df['WEBTEXT'].str.lower() # make the webtext lower case
df['webtokens'] = df['webtext_lc'].apply(nltk.word_tokenize) # tokenize the lower-case webtext by word
df['webtokens_nopunct'] = df['webtokens'].apply(lambda x: [word for word in x if word not in list(string.punctuation)]) # remove punctuation
print(df.iloc[0]["webtokens"]) # the tokenized text without punctuation
# Now we remove stopwords and stem. This will improve the results
df['webtokens_clean'] = df['webtokens_nopunct'].apply(lambda x: [word for word in x if word not in list(stopenglish)]) # remove stopwords
df['webtokens_stemmed'] = df['webtokens_clean'].apply(lambda x: [PorterStemmer().stem(word) for word in x])
# Some analyses require a string version of the webtext without punctuation or numbers.
# To get this, we join together the cleaned and stemmed tokens created above, and then remove numbers and punctuation:
df['webtext_stemmed'] = df['webtokens_stemmed'].apply(lambda x: ' '.join(char for char in x))
df['webtext_stemmed'] = df['webtext_stemmed'].apply(lambda x: ''.join(char for char in x if char not in punctuations))
df['webtext_stemmed'] = df['webtext_stemmed'].apply(lambda x: ''.join(char for char in x if not char.isdigit()))
df['webtext_stemmed'][0]
# Some analyses require tokenized sentences. I'll do this with the list of dictionaries.
# I'll use cleaned, tokenized sentences (with stopwords) to create both a dictionary variable and a separate list for word2vec
words_by_sentence = [] # initialize the list of tokenized sentences as an empty list
for school in sample:
school["sent_toksclean"] = []
school["sent_tokens"] = [word_tokenize(sentence) for sentence in sent_tokenize(school["WEBTEXT"])]
for sent in school["sent_tokens"]:
school["sent_toksclean"].append([PorterStemmer().stem(word.lower()) for word in sent if (word not in punctuations)]) # for each word: stem, lower-case, and remove punctuations
words_by_sentence.append([PorterStemmer().stem(word.lower()) for word in sent if (word not in punctuations)])
words_by_sentence[:2] | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
Counting document lengths | # We can also count document lengths. I'll mostly use the version with punctuation removed but including stopwords,
# because stopwords are also part of these schools' public image/ self-presentation to potential parents, regulators, etc.
df['webstem_count'] = df['webtokens_stemmed'].apply(len) # find word count without stopwords or punctuation
df['webpunct_count'] = df['webtokens_nopunct'].apply(len) # find length with stopwords still in there (but no punctuation)
df['webclean_count'] = df['webtokens_clean'].apply(len) # find word count without stopwords or punctuation
# For which urban status are website self-description the longest?
print(grouped_urban['webpunct_count'].mean().sort_values(ascending=False))
# here's the mean website self-description word count for schools grouped by urban proximity, in a graph:
grouped_urban['webpunct_count'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Schools in mid-sized cities and suburbs have longer self-descriptions than in fringe areas', yerr = grouped_urban["webpunct_count"].std())
plt.show()
# Look at 'FTE' (proxy for # administrators) grouped by urban proximity, to see whether it explains this
grouped_urban['FTE'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Mean FTE (full-time employees) by urban category', yerr = grouped_urban["FTE"].std())
plt.show()
# Now let's calculate the type-token ratio (TTR) for each school, which compares
# the number of types (unique words used) with the number of words (including repetitions of words).
df['numtypes'] = df['webtokens_nopunct'].apply(lambda x: len(set(x))) # this is the number of unique words per site
df['TTR'] = df['numtypes'] / df['webpunct_count'] # calculate TTR
# here's the mean TTR for schools grouped by urban category:
grouped_urban = df.groupby('ULOCAL')
grouped_urban['TTR'].mean().sort_values(ascending=True).plot(kind = 'bar', title='Charters in cities and suburbs have higher textual redundancy than in fringe areas', yerr = grouped_urban["TTR"].std())
plt.show() | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
(Excessively) Frequent words | # First, aggregate all the cleaned webtext:
webtext_all = []
df['webtokens_clean'].apply(lambda x: [webtext_all.append(word) for word in x])
webtext_all[:20]
# Now apply the nltk function FreqDist to count the number of times each token occurs.
word_frequency = nltk.FreqDist(webtext_all)
#print out the 50 most frequent words using the function most_common
print(word_frequency.most_common(50)) | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
### These are prolific, ritual, empty words and will be excluded from topic models!
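One way to act on this (a minimal sketch, not part of the original pipeline) is to fold these high-frequency tokens into the stopword list before the document-term matrix is rebuilt; the cutoff of 50 terms is an illustrative assumption, not a tuned value.
# Hypothetical sketch: extend the stopword list with the 50 most frequent tokens found above,
# so they can be dropped when the document-term matrix is rebuilt for topic modeling.
extra_stops = [word for word, count in word_frequency.most_common(50)]
stopwords_extended = list(stopenglish) + extra_stops  # stopenglish is the list used earlier in the notebook
print(len(stopwords_extended), 'stopwords after extension')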
Distinctive words (mostly place names) | sklearn_dtm = countvec.fit_transform(df['webtext_stemmed'])
print(sklearn_dtm)
# What are some of the words in the DTM?
print(countvec.get_feature_names()[:10])
# now we can create the dtm, but with cells weighted by the tf-idf score.
dtm_tfidf_df = pandas.DataFrame(tfidfvec.fit_transform(df.webtext_stemmed).toarray(), columns=tfidfvec.get_feature_names(), index = df.index)
dtm_tfidf_df[:20] # let's take a look!
# What are the 20 words with the highest TF-IDF scores?
print(dtm_tfidf_df.max().sort_values(ascending=False)[:20]) | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
Like the frequent words above, these highly "unique" words are empty of meaning and will be excluded from topic models!
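A hedged sketch of the analogous step for these terms: pull the top TF-IDF scorers out of dtm_tfidf_df so they can be appended to the exclusion list; the threshold of 20 terms is an assumption for illustration.
# Hypothetical sketch: collect the 20 highest-scoring TF-IDF terms (mostly place names)
# so they can be added to the stopword list before topic modeling.
tfidf_top_terms = list(dtm_tfidf_df.max().sort_values(ascending=False)[:20].index)
print(tfidf_top_terms)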
Word Embeddings with word2vec
Word2Vec features
<ul>
<li>Size: Number of dimensions for word embedding model</li>
<li>Window: Number of context words to observe in each direction</li>
<li>min_count: Minimum frequency for words included in model</li>
<li>sg (Skip-Gram): '0' indicates CBOW model; '1' indicates Skip-Gram</li>
<li>Alpha: Learning rate (initial); prevents model from over-correcting, enables finer tuning</li>
<li>Iterations: Number of passes through dataset</li>
<li>Batch Size: Number of words to sample from data during each pass</li>
<li>Worker: Set the 'worker' option to ensure reproducibility</li>
</ul> | # train the model, keeping only words that appear at least twice (min_count=2)
model = gensim.models.Word2Vec(words_by_sentence, size=100, window=5, \
min_count=2, sg=1, alpha=0.025, iter=5, batch_words=10000, workers=1)
# dictionary of words in model (may not work for old gensim)
# print(len(model.vocab))
# model.vocab
# Find the cosine similarity between two given word vectors
print(model.similarity('college-prep','align')) # these two are close to essentialism
print(model.similarity('emot', 'curios')) # these two are close to progressivism
# create some rough dictionaries for our contrasting educational philosophies
essentialism = ['excel', 'perform', 'prep', 'rigor', 'standard', 'align', 'comprehens', 'content', \
'data-driven', 'market', 'research', 'research-bas', 'program', 'standards-bas']
progressivism = ['inquir', 'curios', 'project', 'teamwork', 'social', 'emot', 'reflect', 'creat',\
'ethic', 'independ', 'discov', 'deep', 'problem-solv', 'natur']
# Let's look at two vectors that demonstrate the binary between these philosophies: align and emot
print(model.most_similar('align')) # words core to essentialism
print()
print(model.most_similar('emot')) # words core to progressivism
# Let's work with the binary between progressivism vs. essentialism
# first let's find the 50 words closest to each philosophy using the two 14-term dictionaries defined above
prog_words = model.most_similar(progressivism, topn=50)
prog_words = [word for word, similarity in prog_words]
for word in progressivism:
prog_words.append(word)
print(prog_words[:20])
ess_words = model.most_similar(essentialism, topn=50) # now let's get the 50 most similar words for our essentialist dictionary
ess_words = [word for word, similarity in ess_words]
for word in essentialism:
ess_words.append(word)
print(ess_words[:20])
# construct a combined dictionary
phil_words = ess_words + prog_words
# preparing for visualizing this binary with word2vec
x = [model.similarity('emot', word) for word in phil_words]
y = [model.similarity('align', word) for word in phil_words]
# here's a visual of the progressivism/essentialism binary:
# top-left half is essentialism, bottom-right half is progressivism
_, ax = plt.subplots(figsize=(20,20))
ax.scatter(x, y, alpha=1, color='b')
for i in range(len(phil_words)):
ax.annotate(phil_words[i], (x[i], y[i]))
ax.set_xlim(.635, 1.005)
ax.set_ylim(.635, 1.005)
plt.plot([0, 1], [0, 1], linestyle='--'); | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
Binary of essentialist (top-left) and progressivist (bottom-right) word vectors
Topic Modeling with scikit-learn
For documentation on this topic modeling (TM) package, which uses Latent Dirichlet Allocation (LDA), see here.
And for documentation on the vectorizer package, CountVectorizer from scikit-learn, see here. | ####Adopted From:
#Author: Olivier Grisel <[email protected]>
# Lars Buitinck
# Chyi-Kwei Yau <[email protected]>
# License: BSD 3 clause
# Initialize the variables needed for the topic models
n_samples = 2000
n_topics = 3
n_top_words = 50
# Create helper function that prints out the top words for each topic in a pretty way
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("\nTopic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
# Vectorize our text using CountVectorizer
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=70, min_df=4,
max_features=None,
stop_words=stopenglish, lowercase=1
)
tf = tf_vectorizer.fit_transform(df.WEBTEXT)
print("Fitting LDA models with tf features, "
"n_samples=%d and n_topics=%d..."
% (n_samples, n_topics))
# define the lda function, with desired options
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=20,
learning_method='online',
learning_offset=80.,
total_samples=n_samples,
random_state=0)
#fit the model
lda.fit(tf)
# print the top words per topic, using the function defined above.
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words) | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
These topics seem to mean:
- topic 0 relates to GOALS,
- topic 1 relates to CURRICULUM, and
- topic 2 relates to PHILOSOPHY or the learning process (though this topic is less clear and more mottled) | # Preparation for looking at distribution of topics over schools
topic_dist = lda.transform(tf) # get the per-document topic distribution
topic_dist_df = pandas.DataFrame(topic_dist) # turn into a df
df_w_topics = topic_dist_df.join(df) # merge with charter MS dataframe
df_w_topics[:20] # check out the merged df with topics!
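# A hedged sketch (not part of the original analysis): attach the interpretive labels proposed
# above to the topic indices so later tables and plots can report them by name. The labels are
# guesses based on the top-word lists, not ground truth; 'PHILOSOPHY' is the least certain.
topic_labels = {0: 'GOALS', 1: 'CURRICULUM', 2: 'PHILOSOPHY'}
for idx, topic in enumerate(lda.components_):
    top = [tf_feature_names[i] for i in topic.argsort()[:-11:-1]]
    print(topic_labels[idx], ':', ' '.join(top))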
topic_columns = range(0,n_topics) # Set numerical range of topic columns for use in analyses, using n_topics from above
# Which schools are weighted highest for topic 0? How do they trend with regard to urban proximity and student class?
print(df_w_topics[['LSTATE', 'ULOCAL', 'PCTETH', 'PCTFRPL', 0, 1, 2]].sort_values(by=[0], ascending=False))
# Preparation for comparing total number of words aligned with each topic
# To weight each topic by its prevalence in the corpus, multiply each topic proportion by the word count from above
col_list = []
for num in topic_columns:
col = "%d_wc" % num
col_list.append(col)
df_w_topics[col] = df_w_topics[num] * df_w_topics['webpunct_count']
df_w_topics[:20]
# Now we can see the prevalence of each topic over words for each urban category and state
grouped_urban = df_w_topics.groupby('ULOCAL')
for e in col_list:
print(e)
print(grouped_urban[e].sum()/grouped_urban['webpunct_count'].sum())
grouped_state = df_w_topics.groupby('LSTATE')
for e in col_list:
print(e)
print(grouped_state[e].sum()/grouped_state['webpunct_count'].sum())
# Here's the distribution of urban proximity over the three topics:
fig1 = plt.figure()
chrt = 0
for num in topic_columns:
chrt += 1
ax = fig1.add_subplot(2,3, chrt)
grouped_urban[num].mean().plot(kind = 'bar', yerr = grouped_urban[num].std(), ylim=0, ax=ax, title=num)
fig1.tight_layout()
plt.show()
# Here's the distribution of each topic over words, for each urban category:
fig2 = plt.figure()
chrt = 0
for e in col_list:
chrt += 1
ax2 = fig2.add_subplot(2,3, chrt)
(grouped_urban[e].sum()/grouped_urban['webpunct_count'].sum()).plot(kind = 'bar', ylim=0, ax=ax2, title=e)
fig2.tight_layout()
plt.show() | scripts/analysis_prelim.ipynb | jhaber-zz/Charter-school-identities | mit |
Import data | features = pd.read_csv('train_values.csv')
labels = pd.read_csv('train_labels.csv')
xlab = 'serum_cholesterol_mg_per_dl'
ylab = 'resting_blood_pressure'
print(labels.head())
features.head()
cluster_arr = np.array(features[[xlab,ylab]]).reshape(-1,2)
cluster_arr[:5] | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Cluster subsample visualization | x = features['serum_cholesterol_mg_per_dl']
y = features['resting_blood_pressure']
trace = [go.Scatter(
x = x,
y = y,
name = 'data',
mode = 'markers',
hoverinfo = 'text',
text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]
)]
layout = go.Layout(
xaxis = dict({'title': xlab}),
yaxis = dict({'title': ylab})
)
fig = go.Figure(data=trace, layout=layout)
iplot(fig, layout) | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Hierarchical Clustering
https://scikit-learn.org/stable/modules/clustering.html
https://scikit-learn.org/stable/modules/classes.html#module-sklearn.cluster
https://stackabuse.com/hierarchical-clustering-with-python-and-scikit-learn/ | from scipy.cluster.hierarchy import dendrogram, linkage | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Single Link | plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'single')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show() | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Complete Link | plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'complete')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show() | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Average Link | plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'average')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show() | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Ward Variance | plt.figure(figsize=(15, 7))
linked = linkage(cluster_arr, 'ward')
# labelList = range(1, 11)
dendrogram(linked,
orientation='top',
# labels=labelList,
distance_sort='descending',
show_leaf_counts=True)
plt.show() | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Density-based clustering
DBSCAN | from sklearn.cluster import DBSCAN
clustering = DBSCAN(eps=3, min_samples=2).fit(cluster_arr)
clustering
y_pred = clustering.labels_
y_pred
x = cluster_arr[:, 0]
y = cluster_arr[:, 1]
# col = ['#F33' if i == 1 else '#33F' for i in y_pred]
trace = [go.Scatter(
x = x,
y = y,
marker = dict(
# color = col,
color = y_pred,
colorscale='MAGMA',
colorbar=dict(
title='Labels'
),
),
name = 'data',
mode = 'markers',
hoverinfo = 'text',
text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]
)]
layout = go.Layout(
xaxis = dict({'title': xlab}),
yaxis = dict({'title': ylab})
)
fig = go.Figure(data=trace, layout=layout)
iplot(fig, layout) | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Other methods based on DBSCAN
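One DBSCAN-derived method worth trying here is OPTICS, which avoids fixing eps up front. A hedged sketch (assumes scikit-learn >= 0.21; min_samples=5 is an arbitrary choice):
from sklearn.cluster import OPTICS
# Hypothetical sketch: OPTICS orders points by density reachability instead of a single fixed eps
optics_labels = OPTICS(min_samples=5).fit(cluster_arr).labels_
print(set(optics_labels))  # -1 marks noise points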
K-Means | from sklearn.cluster import KMeans
y_pred = KMeans(n_clusters=2, random_state=random_state).fit_predict(cluster_arr)
y_pred
x = cluster_arr[:, 0]
y = cluster_arr[:, 1]
# col = ['#F33' if i == 1 else '#33F' for i in y_pred]
trace = [go.Scatter(
x = x,
y = y,
marker = dict(
# color = col,
color = y_pred,
colorscale='YlOrRd',
colorbar=dict(
title='Labels'
),
),
name = 'data',
mode = 'markers',
hoverinfo = 'text',
text = ['x: %s<br>y: %s' % (x_i, y_i) for x_i, y_i in zip(x, y)]
)]
layout = go.Layout(
xaxis = dict({'title': xlab}),
yaxis = dict({'title': ylab})
)
fig = go.Figure(data=trace, layout=layout)
iplot(fig, layout)
Ks = range(2, 11)
km = [KMeans(n_clusters=i) for i in Ks] # , verbose=True
# score = [km[i].fit(cluster_arr).score(cluster_arr) for i in range(len(km))]
fitted = [km[i].fit(cluster_arr) for i in range(len(km))]
score = [fitted[i].score(cluster_arr) for i in range(len(km))]
inertia = [fitted[i].inertia_ for i in range(len(km))]
relative_diff = [inertia[0]]
relative_diff.extend([inertia[i-1] - inertia[i] for i in range(1, len(inertia))])
print(fitted[:1])
print(score[:1])
print(inertia[:1])
print(relative_diff)
fitted[0]
dir(fitted[0])[:5]
data = [
# go.Bar(
# x = list(Ks),
# y = score
# ),
go.Bar(
x = list(Ks),
y = inertia,
text = ['Diff is: %s' % diff for diff in relative_diff]
),
go.Scatter(
x = list(Ks),
y = inertia
),
]
layout = go.Layout(
xaxis = dict(
title = 'No of Clusters [%s-%s]' % (min(Ks), max(Ks))
),
yaxis = dict(
title = 'Sklearn score / inertia'
),
# barmode='stack'
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
data = [
go.Bar(
x = list(Ks),
y = relative_diff
),
go.Scatter(
x = list(Ks),
y = relative_diff
),
]
layout = go.Layout(
xaxis = dict(
title = 'No of Clusters [%s-%s]' % (min(Ks), max(Ks))
),
yaxis = dict(
title = 'Pairwise difference'
),
# barmode='stack'
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
nb_end = dt.now()
'Time elapsed: %s' % (nb_end - nb_start) | kaggle/machine-learning-with-a-heart/Lab5.ipynb | xR86/ml-stuff | mit |
Dates
For both filtering and output, it is often necessary to parse and/or normalize the created_at date. The following shows the original created_at date and the date as an ISO 8601 date. | !head -n5 tweets.json | jq -c '[.created_at, .created_at | strptime("%A %B %d %T %z %Y") | todate]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Filtering
Filtering text
Case sensitive | !cat tweets.json | jq -c 'select(.text | contains("blog")) | [.id_str, .text]'
!cat tweets.json | jq -c 'select(.text | contains("BLOG")) | [.id_str, .text]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Case insensitive
To ignore case, use a regular expression filter with the case-insensitive flag. | !cat tweets.json | jq -c 'select(.text | test("BLog"; "i")) | [.id_str, .text]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Filtering on multiple terms (OR) | !cat tweets.json | jq -c 'select(.text | test("BLog|twarc"; "i")) | [.id_str, .text]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Filtering on multiple terms (AND) | !cat tweets.json | jq -c 'select((.text | test("BLog"; "i")) and (.text | test("twitter"; "i"))) | [.id_str, .text]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Filter dates
The following shows tweets created after November 5, 2016. | !cat tweets.json | jq -c 'select((.created_at | strptime("%A %B %d %T %z %Y") | mktime) > ("2016-11-05T00:00:00Z" | fromdateiso8601)) | [.id_str, .created_at, (.created_at | strptime("%A %B %d %T %z %Y") | todate)]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Is retweet | !cat tweets.json | jq -c 'select(has("retweeted_status")) | [.id_str, .retweeted_status.id]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Is quote | !cat tweets.json | jq -c 'select(has("quoted_status")) | [.id_str, .quoted_status.id]' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
Output
To write output to a file use > <filename>. For example: cat tweets.json | jq -r '.id_str' > tweet_ids.txt
CSV
The following produces CSV output with fields similar to those in the CSV produced by SFM's export functionality.
Note that it uses the -r flag for jq instead of the -c flag.
Also note that it is necessary to remove line breaks from the tweet text to prevent it from breaking the CSV. This is done with (.text | gsub("\n";" ")). | !head -n5 tweets.json | jq -r '[(.created_at | strptime("%A %B %d %T %z %Y") | todate), .id_str, .user.screen_name, .user.followers_count, .user.friends_count, .retweet_count, .favorite_count, .in_reply_to_screen_name, "http://twitter.com/" + .user.screen_name + "/status/" + .id_str, (.text | gsub("\n";" ")), has("retweeted_status"), has("quoted_status")] | @csv'
Header row
The header row should be written to the output file with > before appending the CSV with >>. | !echo "[]" | jq -r '["created_at","twitter_id","screen_name","followers_count","friends_count","retweet_count","favorite_count","in_reply_to_screen_name","twitter_url","text","is_retweet","is_quote"] | @csv' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
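Putting the two steps together (a sketch: tweets.csv is an assumed output filename, and the column list is shortened to two fields for illustration):
!echo "[]" | jq -r '["twitter_id","text"] | @csv' > tweets.csv
!cat tweets.json | jq -r '[.id_str, (.text | gsub("\n";" "))] | @csv' >> tweets.csv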
Splitting files
Excel can load CSV files with over a million rows. However, for practical purposes a much smaller number is recommended.
The following uses the split command to split the CSV output into multiple files. Note that the flags accepted may be different in your environment.
cat tweets.json | jq -r '[.id_str, (.text | gsub("\n";" "))] | @csv' | split --lines=5 -d --additional-suffix=.csv - tweets
ls *.csv
tweets00.csv tweets01.csv tweets02.csv tweets03.csv tweets04.csv
tweets05.csv tweets06.csv tweets07.csv tweets08.csv tweets09.csv
--lines=5 sets the number of lines to include in each file.
--additional-suffix=.csv sets the file extension.
tweets is the base name for each file.
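If each split file also needs the header row from the previous section, it can be prepended afterwards (a sketch; assumes GNU sed and the two-column export used above):
!sed -i '1i "twitter_id","text"' tweets0*.csv
!head -n2 tweets00.csv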
Tweet ids
When outputting tweet ids, .id_str should be used instead of .id. See Ed Summer's blog post for an explanation. | !head -n5 tweets.json | jq -r '.id_str' | 20161122-twitter-jq-recipes/twitter_jq_recipes.ipynb | gwu-libraries/notebooks | mit |
HDMI Frontend
The HDMI frontend modules wrap all of the clock and timing logic. The HDMI input frontend can be used independently from the rest of the pipeline by accessing its driver from the base overlay. | hdmiin_frontend = base.video.hdmi_in.frontend | boards/Pynq-Z1/base/notebooks/video/hdmi_video_pipeline.ipynb | cathalmccabe/PYNQ | bsd-3-clause |
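A minimal usage sketch (hedged: it assumes an HDMI source is already connected and follows the PYNQ video API, where start() waits for a valid input signal):
# Hypothetical usage: start the input frontend on its own and inspect the detected video mode
hdmiin_frontend.start()
hdmiin_frontend.mode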