hexsha stringlengths 40-40 | size int64 6-14.9M | ext stringclasses 1 value | lang stringclasses 1 value | max_stars_repo_path stringlengths 6-260 | max_stars_repo_name stringlengths 6-119 | max_stars_repo_head_hexsha stringlengths 40-41 | max_stars_repo_licenses sequence | max_stars_count int64 1-191k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24-24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24-24 ⌀ | max_issues_repo_path stringlengths 6-260 | max_issues_repo_name stringlengths 6-119 | max_issues_repo_head_hexsha stringlengths 40-41 | max_issues_repo_licenses sequence | max_issues_count int64 1-67k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24-24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24-24 ⌀ | max_forks_repo_path stringlengths 6-260 | max_forks_repo_name stringlengths 6-119 | max_forks_repo_head_hexsha stringlengths 40-41 | max_forks_repo_licenses sequence | max_forks_count int64 1-105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24-24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24-24 ⌀ | avg_line_length float64 2-1.04M | max_line_length int64 2-11.2M | alphanum_fraction float64 0-1 | cells sequence | cell_types sequence | cell_type_groups sequence |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
d0d670c1f3143cccac66726f06d3814d37402daa | 14,876 | ipynb | Jupyter Notebook | homework03/homework03_part3_gan_basic.ipynb | Tobichimaru/Practical_DL | 1913559302b8333829c145408a98a99e09405401 | [
"MIT"
] | 1 | 2020-09-18T11:58:08.000Z | 2020-09-18T11:58:08.000Z | homework03/homework03_part3_gan_basic.ipynb | Tobichimaru/Practical_DL | 1913559302b8333829c145408a98a99e09405401 | [
"MIT"
] | 8 | 2019-05-17T09:49:41.000Z | 2021-05-26T12:03:11.000Z | homework03/homework03_part3_gan_basic.ipynb | Tobichimaru/Practical_DL | 1913559302b8333829c145408a98a99e09405401 | [
"MIT"
] | 3 | 2018-09-21T13:52:19.000Z | 2018-09-24T11:30:15.000Z | 31.252101 | 692 | 0.515864 | [
[
[
"The visualization used for this homework is based on Alexandr Verinov's code. ",
"_____no_output_____"
],
[
"# Generative models",
"_____no_output_____"
],
[
"In this homework we will try several criterions for learning an implicit model. Almost everything is written for you, and you only need to implement the objective for the game and play around with the model. \n\n**0)** Read the code\n\n**1)** Implement objective for a vanilla [Generative Adversarial Networks](https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf) (GAN). The hyperparameters are already set in the code. The model will converge if you implement the objective (1) right. \n\n**2)** Note the discussion in the paper, that the objective for $G$ can be of two kinds: $min_G log(1 - D)$ and $min_G - log(D)$. Implement the second objective and ensure model converges. Most likely, in this example you will not notice the difference, but people usually use the second objective, it really matters in more complicated scenarios.\n\n**3 & 4)** Implement [Wasserstein GAN](https://arxiv.org/abs/1701.07875) ([WGAN](https://arxiv.org/abs/1704.00028)) and WGAN-GP. To make the discriminator have Lipschitz property you need to clip discriminator's weights to $[-0.01, 0.01]$ range (WGAN) or use gradient penalty (WGAN-GP). You will need to make few modifications to the code: 1) remove sigmoids from discriminator 2) add weight clipping clipping / gradient penaly. 3) change objective. See [implementation 1](https://github.com/martinarjovsky/WassersteinGAN/) / [implementation 2](https://github.com/caogang/wgan-gp). They also use different optimizer. The default hyperparameters may not work, spend time to tune them.\n\n**5) Bonus: same thing without GANs** Implement maximum mean discrepancy estimator (MMD). MMD is discrepancy measure between distributions. In our case we use it to calculate discrepancy between real and fake data. You need to implement RBF kernel $k(x,x')=\\exp \\left(-{\\frac {1}{2\\sigma ^{2}}}||x-x'||^{2}\\right)$ and an MMD estimator (see eq.8 from https://arxiv.org/pdf/1505.03906.pdf). MMD is then used instead of discriminator.",
"_____no_output_____"
]
],
[
[
"\"\"\" \n Please, implement everything in one notebook, using if statements to switch between the tasks\n\"\"\"\nTASK = 1 # 2, 3, 4, 5",
"_____no_output_____"
]
],
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport time\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torch\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.random.seed(12345)\nlims=(-5, 5)",
"_____no_output_____"
]
],
[
[
"# Define sampler from real data and Z ",
"_____no_output_____"
]
],
[
[
"from scipy.stats import rv_discrete\n\nMEANS = np.array(\n [[-1,-3],\n [1,3],\n [-2,0],\n ])\nCOVS = np.array(\n [[[1,0.8],[0.8,1]],\n [[1,-0.5],[-0.5,1]],\n [[1,0],[0,1]],\n ])\nPROBS = np.array([\n 0.2,\n 0.5,\n 0.3\n ])\nassert len(MEANS) == len(COVS) == len(PROBS), \"number of components mismatch\"\nCOMPONENTS = len(MEANS)\n\ncomps_dist = rv_discrete(values=(range(COMPONENTS), PROBS))\n\ndef sample_true(N):\n comps = comps_dist.rvs(size=N)\n conds = np.arange(COMPONENTS)[:,None] == comps[None,:]\n arr = np.array([np.random.multivariate_normal(MEANS[c], COVS[c], size=N)\n for c in range(COMPONENTS)])\n return np.select(conds[:,:,None], arr).astype(np.float32)\n\nNOISE_DIM = 20\ndef sample_noise(N):\n return np.random.normal(size=(N,NOISE_DIM)).astype(np.float32)",
"_____no_output_____"
]
],
[
[
"# Visualization functions",
"_____no_output_____"
]
],
[
[
"def vis_data(data):\n \"\"\"\n Visualizes data as histogram\n \"\"\"\n hist = np.histogram2d(data[:, 1], data[:, 0], bins=100, range=[lims, lims])\n plt.pcolormesh(hist[1], hist[2], hist[0], alpha=0.5)\n\nfixed_noise = sample_noise(1000)\ndef vis_g():\n \"\"\"\n Visualizes generator's samples as circles\n \"\"\"\n data = generator(Variable(torch.Tensor(fixed_noise))).data.numpy()\n if np.isnan(data).any():\n return\n \n plt.scatter(data[:,0], data[:,1], alpha=0.2, c='b')\n plt.xlim(lims)\n plt.ylim(lims)\n \ndef vis_d():\n \"\"\"\n Visualizes discriminator's gradient on grid\n \"\"\"\n X, Y = np.meshgrid(np.linspace(lims[0], lims[1], 30), np.linspace(lims[0], lims[1], 30))\n X = X.flatten()\n Y = Y.flatten()\n grid = Variable(torch.Tensor(np.vstack([X, Y]).T), requires_grad=True)\n data_gen = generator(Variable(torch.Tensor(fixed_noise)))\n loss = d_loss(discriminator(data_gen), discriminator(grid))\n loss.backward()\n grads = - grid.grad.data.numpy()\n plt.quiver(X, Y, grads[:, 0], grads[:, 1], color='black',alpha=0.9)",
"_____no_output_____"
]
],
[
[
"# Define architectures",
"_____no_output_____"
],
[
"After you've passed task 1 you can play with architectures.",
"_____no_output_____"
],
[
"#### Generator",
"_____no_output_____"
]
],
[
[
"class Generator(nn.Module):\n def __init__(self, noise_dim, out_dim, hidden_dim=100):\n super(Generator, self).__init__()\n \n self.fc1 = nn.Linear(noise_dim, hidden_dim)\n nn.init.xavier_normal(self.fc1.weight)\n nn.init.constant(self.fc1.bias, 0.0)\n \n self.fc2 = nn.Linear(hidden_dim, hidden_dim)\n nn.init.xavier_normal(self.fc2.weight)\n nn.init.constant(self.fc2.bias, 0.0)\n \n self.fc3 = nn.Linear(hidden_dim, out_dim)\n nn.init.xavier_normal(self.fc3.weight)\n nn.init.constant(self.fc3.bias, 0.0)\n\n def forward(self, z):\n \"\"\"\n Generator takes a vector of noise and produces sample\n \"\"\"\n h1 = F.tanh(self.fc1(z))\n h2 = F.leaky_relu(self.fc2(h1))\n y_gen = self.fc3(h2)\n return y_gen",
"_____no_output_____"
]
],
[
[
"#### Discriminator",
"_____no_output_____"
]
],
[
[
"class Discriminator(nn.Module):\n def __init__(self, in_dim, hidden_dim=100):\n super(Discriminator, self).__init__()\n \n self.fc1 = nn.Linear(in_dim, hidden_dim)\n nn.init.xavier_normal(self.fc1.weight)\n nn.init.constant(self.fc1.bias, 0.0)\n \n self.fc2 = nn.Linear(hidden_dim, hidden_dim)\n nn.init.xavier_normal(self.fc2.weight)\n nn.init.constant(self.fc2.bias, 0.0)\n \n self.fc3 = nn.Linear(hidden_dim, hidden_dim)\n nn.init.xavier_normal(self.fc3.weight)\n nn.init.constant(self.fc3.bias, 0.0)\n \n self.fc4 = nn.Linear(hidden_dim, 1)\n nn.init.xavier_normal(self.fc4.weight)\n nn.init.constant(self.fc4.bias, 0.0)\n\n def forward(self, x):\n h1 = F.tanh(self.fc1(x))\n h2 = F.leaky_relu(self.fc2(h1))\n h3 = F.leaky_relu(self.fc3(h2))\n score = F.sigmoid(self.fc4(h3))\n return score",
"_____no_output_____"
]
],
[
[
"# Define updates and losses",
"_____no_output_____"
]
],
[
[
"generator = Generator(NOISE_DIM, out_dim = 2)\ndiscriminator = Discriminator(in_dim = 2)\n\nlr = 0.001\n\ng_optimizer = optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))\nd_optimizer = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999))",
"_____no_output_____"
]
],
[
[
"Notice we are using ADAM optimizer with `beta1=0.5` for both discriminator and discriminator. This is a common practice and works well. Motivation: models should be flexible and adapt itself rapidly to the distributions. \n\nYou can try different optimizers and parameters.",
"_____no_output_____"
]
],
[
[
"################################\n# IMPLEMENT HERE\n# Define the g_loss and d_loss here\n# these are the only lines of code you need to change to implement GAN game\n\ndef g_loss():\n # if TASK == 1: \n # do something\n \n return # TODO\ndef d_loss():\n # if TASK == 1: \n # do something\n\n return # TODO\n################################",
"_____no_output_____"
]
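,
[
"# A minimal reference sketch for TASK == 1 (vanilla GAN), NOT the only valid solution.\n# It assumes the discriminator outputs sigmoid probabilities (as defined above) and the\n# stub signatures g_loss(d_gen) / d_loss(d_gen, d_real); to use it, assign g_loss = g_loss_vanilla, etc.\nEPS = 1e-8\n\ndef g_loss_vanilla(d_gen):\n    # non-saturating generator objective: min_G -log D(G(z))\n    return -torch.log(d_gen + EPS).mean()\n\ndef d_loss_vanilla(d_gen, d_real):\n    # the discriminator maximizes log D(x) + log(1 - D(G(z))); written as a loss to minimize\n    return -(torch.log(d_real + EPS) + torch.log(1. - d_gen + EPS)).mean()",
"_____no_output_____"
],
[
"# A sketch for bonus task (5): an RBF kernel and a (biased) MMD^2 estimator between real\n# samples x and generated samples y. The bandwidth sigma is an assumption you would tune.\ndef rbf_kernel(x, y, sigma=1.0):\n    # k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)), computed for all pairs\n    d2 = ((x.unsqueeze(1) - y.unsqueeze(0)) ** 2).sum(-1)\n    return torch.exp(-d2 / (2. * sigma ** 2))\n\ndef mmd2(x, y, sigma=1.0):\n    return (rbf_kernel(x, x, sigma).mean()\n            + rbf_kernel(y, y, sigma).mean()\n            - 2. * rbf_kernel(x, y, sigma).mean())",
"_____no_output_____"
]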
],
[
[
"# Get real data",
"_____no_output_____"
]
],
[
[
"data = sample_true(100000)\ndef iterate_minibatches(X, batchsize, y=None):\n perm = np.random.permutation(X.shape[0])\n \n for start in range(0, X.shape[0], batchsize):\n end = min(start + batchsize, X.shape[0])\n if y is None:\n yield X[perm[start:end]]\n else:\n yield X[perm[start:end]], y[perm[start:end]]",
"_____no_output_____"
],
[
"plt.rcParams['figure.figsize'] = (12, 12)\nvis_data(data)\nvis_g()\nvis_d()",
"_____no_output_____"
]
],
[
[
"**Legend**:\n- Blue dots are generated samples. \n- Colored histogram at the back shows density of real data. \n- And with arrows we show gradients of the discriminator -- they are the directions that discriminator pushes generator's samples. ",
"_____no_output_____"
],
[
"# Train the model",
"_____no_output_____"
]
],
[
[
"from IPython import display\n\nplt.xlim(lims)\nplt.ylim(lims)\n\nnum_epochs = 100\nbatch_size = 64\n\n# ===========================\n# IMPORTANT PARAMETER:\n# Number of D updates per G update\n# ===========================\nk_d, k_g = 4, 1\n\naccs = []\n\ntry:\n for epoch in range(num_epochs):\n for input_data in iterate_minibatches(data, batch_size):\n \n # Optimize D\n for _ in range(k_d):\n # Sample noise\n noise = Variable(torch.Tensor(sample_noise(len(input_data))))\n \n # Do an update\n inp_data = Variable(torch.Tensor(input_data))\n data_gen = generator(noise)\n loss = d_loss(discriminator(data_gen), discriminator(inp_data))\n d_optimizer.zero_grad()\n loss.backward()\n d_optimizer.step()\n \n # Optimize G\n for _ in range(k_g):\n # Sample noise\n noise = Variable(torch.Tensor(sample_noise(len(input_data))))\n \n # Do an update\n data_gen = generator(noise)\n loss = g_loss(discriminator(data_gen))\n g_optimizer.zero_grad()\n loss.backward()\n g_optimizer.step()\n \n # Visualize\n plt.clf()\n vis_data(data); vis_g(); vis_d()\n display.clear_output(wait=True)\n display.display(plt.gcf())\n\n \nexcept KeyboardInterrupt:\n pass",
"_____no_output_____"
]
],
[
[
"# Describe your findings here",
"_____no_output_____"
],
[
"A ya tomat. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
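"code",
"code",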
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0d67f607f21bc65781c5141dab1231a8410eea5 | 133,981 | ipynb | Jupyter Notebook | byoyo2018-giris-1.ipynb | ardakdemir/notes | 4f7f2059622218210d3f5ab867197e4ece086128 | [
"MIT"
] | null | null | null | byoyo2018-giris-1.ipynb | ardakdemir/notes | 4f7f2059622218210d3f5ab867197e4ece086128 | [
"MIT"
] | 1 | 2018-07-12T09:38:40.000Z | 2018-07-12T09:38:40.000Z | byoyo2018-giris-1.ipynb | ardakdemir/notes | 4f7f2059622218210d3f5ab867197e4ece086128 | [
"MIT"
] | null | null | null | 106.249802 | 20,364 | 0.818638 | [
[
[
"BYÖYO 2018\n\nYapay Öğrenmeye Giriş I\n\nAli Taylan Cemgil\n\n\n2 Temmuz 2018",
"_____no_output_____"
],
[
"# Parametrik Regresyon, Parametrik Fonksyon Oturtma Problemi (Parametric Regression, Function Fitting)\n\n\nVerilen girdi ve çıktı ikilileri $x, y$ için parametrik bir fonksyon $f$ oturtma problemi. \n\nParametre $w$ değerlerini öyle bir seçelim ki \n$$\ny \\approx f(x; w)\n$$\n\n$x$: Girdi (Input)\n\n$y$: Çıktı (Output)\n\n$w$: Parametre (Weight, ağırlık)\n\n$e$: Hata\n\nÖrnek 1: \n$$\ne = y - f(x)\n$$\n\nÖrnek 2:\n$$\ne = \\frac{y}{f(x)}-1\n$$\n\n$E$, $D$: Hata fonksyonu (Error function), Iraksay (Divergence)\n\n\n\n# Doğrusal Regresyon (Linear Regression)\n\nOturtulacak $f$ fonksyonun **model parametreleri** $w$ cinsinden doğrusal olduğu durum (Girdiler $x$ cinsinden doğrusal olması gerekmez). \n\n## Tanım: Doğrusallık\nBir $g$ fonksyonu doğrusaldır demek, herhangi skalar $a$ ve $b$ içn\n$$\ng(aw_1 + b w_2) = a g(w_1) + b g(w_2)\n$$\nolması demektir.\n\n\n\n\n",
"_____no_output_____"
],
[
"## Örnek: Doğru oturtmak (Line Fitting)\n\n* Girdi-Çıktı ikilileri\n$$\n(x_i, y_i)\n$$\n$i=1\\dots N$ \n\n* Model\n$$\ny_i \\approx f(x; w_1, w_0) = w_0 + w_1 x \n$$\n\n\n> $x$ : Girdi \n\n> $w_1$: Eğim\n\n> $w_0$: Kesişme\n\n$f_i \\equiv f(x_i; w_1, w_0)$\n\n## Örnek 2: Parabol Oturtma\n\n* Girdi-Çıktı ikilileri\n$$\n(x_i, y_i)\n$$\n$i=1\\dots N$ \n\n* Model\n$$\ny_i \\approx f(x_i; w_2, w_1, w_0) = w_0 + w_1 x_i + w_2 x_i^2\n$$\n\n\n> $x$ : Girdi \n\n> $w_2$: Karesel terimin katsayısı \n\n> $w_1$: Doğrusal terimin katsayısı\n\n> $w_0$: Sabit terim katsayısı\n\n$f_i \\equiv f(x_i; w_2, w_1, w_0)$\n\nBir parabol $x$'in doğrusal fonksyonu değil ama $w_2, w_1, w_0$ parametrelerinin doğrusal fonksyonu.\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\nimport matplotlib.pylab as plt\nfrom IPython.display import clear_output, display, HTML\n\nx = np.array([8.0 , 6.1 , 11., 7., 9., 12. , 4., 2., 10, 5, 3])\ny = np.array([6.04, 4.95, 5.58, 6.81, 6.33, 7.96, 5.24, 2.26, 8.84, 2.82, 3.68])\n\ndef plot_fit(w1, w0):\n f = w0 + w1*x\n\n plt.figure(figsize=(4,3))\n plt.plot(x,y,'sk')\n plt.plot(x,f,'o-r')\n #plt.axis('equal')\n plt.xlim((0,15))\n plt.ylim((0,10))\n for i in range(len(x)):\n plt.plot((x[i],x[i]),(f[i],y[i]),'b')\n# plt.show()\n# plt.figure(figsize=(4,1))\n plt.bar(x,(f-y)**2/2)\n plt.title('Toplam kare hata = '+str(np.sum((f-y)**2/2)))\n plt.ylim((0,10))\n plt.xlim((0,15))\n plt.show()\n \nplot_fit(0.0,3.79)",
"_____no_output_____"
],
[
"interact(plot_fit, w1=(-2, 2, 0.01), w0=(-5, 5, 0.01));",
"_____no_output_____"
]
],
[
[
"Gerçek veri: Türkiyedeki araç sayıları",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport scipy as sc\nimport numpy as np\nimport pandas as pd\nimport matplotlib as mpl\nimport matplotlib.pylab as plt\n\ndf_arac = pd.read_csv(u'data/arac.csv',sep=';')\ndf_arac[['Year','Car']]\n#df_arac",
"_____no_output_____"
],
[
"BaseYear = 1995\nx = np.matrix(df_arac.Year[0:]).T-BaseYear\ny = np.matrix(df_arac.Car[0:]).T/1000000.\n\nplt.plot(x+BaseYear, y, 'o-')\nplt.xlabel('Yil')\nplt.ylabel('Araba (Milyon)')\n\nplt.show()",
"_____no_output_____"
],
[
"%matplotlib inline\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\nimport matplotlib.pylab as plt\nfrom IPython.display import clear_output, display, HTML\n\n\nw_0 = 0.27150786\nw_1 = 0.37332256\n\nBaseYear = 1995\nx = np.matrix(df_arac.Year[0:]).T-BaseYear\ny = np.matrix(df_arac.Car[0:]).T/1000000.\n\nfig, ax = plt.subplots()\n\nf = w_1*x + w_0\nplt.plot(x+BaseYear, y, 'o-')\nln, = plt.plot(x+BaseYear, f, 'r')\n\n\nplt.xlabel('Years')\nplt.ylabel('Number of Cars (Millions)')\nax.set_ylim((-2,13))\nplt.close(fig)\n\ndef set_line(w_1, w_0):\n\n f = w_1*x + w_0\n e = y - f\n\n ln.set_ydata(f)\n ax.set_title('Total Error = {} '.format(np.asscalar(e.T*e/2)))\n display(fig)\n\nset_line(0.32,3)",
"_____no_output_____"
],
[
"interact(set_line, w_1=(-2, 2, 0.01), w_0=(-5, 5, 0.01));",
"_____no_output_____"
],
[
"w_0 = 0.27150786\nw_1 = 0.37332256\nw_2 = 0.1\n\nBaseYear = 1995\nx = np.array(df_arac.Year[0:]).T-BaseYear\ny = np.array(df_arac.Car[0:]).T/1000000.\n\nfig, ax = plt.subplots()\n\nf = w_2*x**2 + w_1*x + w_0\nplt.plot(x+BaseYear, y, 'o-')\nln, = plt.plot(x+BaseYear, f, 'r')\n\n\nplt.xlabel('Yıl')\nplt.ylabel('Araba Sayısı (Milyon)')\nax.set_ylim((-2,13))\nplt.close(fig)\n\ndef set_line(w_2, w_1, w_0):\n f = w_2*x**2 + w_1*x + w_0\n e = y - f\n ln.set_ydata(f)\n ax.set_title('Ortalama Kare Hata = {} '.format(np.sum(e*e/len(e))))\n display(fig)\n\nset_line(w_2, w_1, w_0)",
"_____no_output_____"
],
[
"interact(set_line, w_2=(-0.1,0.1,0.001), w_1=(-2, 2, 0.01), w_0=(-5, 5, 0.01))",
"_____no_output_____"
]
],
[
[
"## Örnek 1, devam: Modeli Öğrenmek\n\n* Öğrenmek: parametre kestirimi $w = [w_0, w_1]$\n\n* Genelde model veriyi hatasız açıklayamayacağı için her veri noktası için bir hata tanımlıyoruz:\n\n$$e_i = y_i - f(x_i; w)$$\n\n* Toplam kare hata \n\n$$\nE(w) = \\frac{1}{2} \\sum_i (y_i - f(x_i; w))^2 = \\frac{1}{2} \\sum_i e_i^2\n$$\n\n* Toplam kare hatayı $w_0$ ve $w_1$ parametrelerini değiştirerek azaltmaya çalışabiliriz.\n\n* Hata yüzeyi ",
"_____no_output_____"
]
],
[
[
"from itertools import product\n\nBaseYear = 1995\nx = np.matrix(df_arac.Year[0:]).T-BaseYear\ny = np.matrix(df_arac.Car[0:]).T/1000000.\n\n# Setup the vandermonde matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\nleft = -5\nright = 15\nbottom = -4\ntop = 6\nstep = 0.05\nW0 = np.arange(left,right, step)\nW1 = np.arange(bottom,top, step)\n\nErrSurf = np.zeros((len(W1),len(W0)))\n\nfor i,j in product(range(len(W1)), range(len(W0))):\n e = y - A*np.matrix([W0[j], W1[i]]).T\n ErrSurf[i,j] = e.T*e/2\n\nplt.figure(figsize=(7,7))\nplt.imshow(ErrSurf, interpolation='nearest', \n vmin=0, vmax=1000,origin='lower',\n extent=(left,right,bottom,top),cmap='Blues_r')\nplt.xlabel('w0')\nplt.ylabel('w1')\nplt.title('Error Surface')\nplt.colorbar(orientation='horizontal')\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Modeli Nasıl Kestirebiliriz?\n\n## Fikir: En küçük kare hata \n(Gauss 1795, Legendre 1805)\n\n* Toplam hatanın $w_0$ ve $w_1$'e göre türevini hesapla, sıfıra eşitle ve çıkan denklemleri çöz\n\n\n\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\ny_0 \\\\ y_1 \\\\ \\vdots \\\\ y_{N-1} \n\\end{array}\n\\right)\n\\approx\n\\left(\n\\begin{array}{cc}\n1 & x_0 \\\\ 1 & x_1 \\\\ \\vdots \\\\ 1 & x_{N-1} \n\\end{array}\n\\right) \n\\left(\n\\begin{array}{c}\n w_0 \\\\ w_1 \n\\end{array}\n\\right)\n\\end{eqnarray}\n\n\\begin{eqnarray}\ny \\approx A w\n\\end{eqnarray}\n\n> $A = A(x)$: Model Matrisi\n\n> $w$: Model Parametreleri\n\n> $y$: Gözlemler\n\n* Hata vektörü: $$e = y - Aw$$\n\n\\begin{eqnarray}\nE(w) & = & \\frac{1}{2}e^\\top e = \\frac{1}{2}(y - Aw)^\\top (y - Aw)\\\\\n& = & \\frac{1}{2}y^\\top y - \\frac{1}{2} y^\\top Aw - \\frac{1}{2} w^\\top A^\\top y + \\frac{1}{2} w^\\top A^\\top Aw \\\\\n& = & \\frac{1}{2} y^\\top y - y^\\top Aw + \\frac{1}{2} w^\\top A^\\top Aw \\\\\n\\end{eqnarray}\n\n### Gradyan\nhttps://tr.khanacademy.org/math/multivariable-calculus/multivariable-derivatives/partial-derivative-and-gradient-articles/a/the-gradient\n\n\\begin{eqnarray}\n\\frac{d E}{d w } & = & \\left(\\begin{array}{c}\n \\partial E/\\partial w_0 \\\\ \\partial E/\\partial w_1 \\\\ \\vdots \\\\ \\partial E/\\partial w_{K-1}\n\\end{array}\\right)\n\\end{eqnarray}\n \nToplam hatanın gradyanı\n\\begin{eqnarray}\n\\frac{d}{d w }E(w) & = & \\frac{d}{d w }(\\frac{1}{2} y^\\top y) &+ \\frac{d}{d w }(- y^\\top Aw) &+ \\frac{d}{d w }(\\frac{1}{2} w^\\top A^\\top Aw) \\\\\n& = & 0 &- A^\\top y &+ A^\\top A w \\\\\n& = & - A^\\top (y - Aw) \\\\\n& = & - A^\\top e \\\\\n& \\equiv & \\nabla E(w)\n\\end{eqnarray}\n\n### Yapay zekaya gönül veren herkesin bilmesi gereken eşitlikler\n* Vektör iç çarpımının gradyeni\n\\begin{eqnarray}\n\\frac{d}{d w }(h^\\top w) & = & h\n\\end{eqnarray}\n\n* Karesel bir ifadenin gradyeni\n\\begin{eqnarray}\n\\frac{d}{d w }(w^\\top K w) & = & (K+K^\\top) w\n\\end{eqnarray}\n\n\n### En küçük kare hata çözümü doğrusal modellerde doğrusal denklemlerin çözümü ile bulunabiliyor\n\n\n\\begin{eqnarray}\nw^* & = & \\arg\\min_{w} E(w)\n\\end{eqnarray}\n\n* Eniyileme Şartı (gradyan sıfır olmalı )\n\n\\begin{eqnarray}\n\\nabla E(w^*) & = & 0\n\\end{eqnarray}\n\n\\begin{eqnarray}\n0 & = & - A^\\top y + A^\\top A w^* \\\\\nA^\\top y & = & A^\\top A w^* \\\\\nw^* & = & (A^\\top A)^{-1} A^\\top y \n\\end{eqnarray}\n\n* Geometrik (Projeksyon) yorumu:\n\n\\begin{eqnarray}\nf & = A w^* = A (A^\\top A)^{-1} A^\\top y \n\\end{eqnarray}\n\n",
"_____no_output_____"
]
],
[
[
"# Solving the Normal Equations\n\n# Setup the Design matrix\nN = len(x)\nA = np.hstack((np.ones((N,1)), x))\n\n#plt.imshow(A, interpolation='nearest')\n# Solve the least squares problem\nw_ls,E,rank,sigma = np.linalg.lstsq(A, y)\n\nprint('Parametreler: \\nw0 = ', w_ls[0],'\\nw1 = ', w_ls[1] )\nprint('Toplam Kare Hata:', E/2)\n\nf = np.asscalar(w_ls[1])*x + np.asscalar(w_ls[0])\nplt.plot(x+BaseYear, y, 'o-')\nplt.plot(x+BaseYear, f, 'r')\n\n\nplt.xlabel('Yıl')\nplt.ylabel('Araba sayısı (Milyon)')\nplt.show()",
"Parametreler: \nw0 = [[ 4.13258253]] \nw1 = [[ 0.20987778]]\nToplam Kare Hata: [[ 37.19722385]]\n"
]
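,
[
"# A quick cross-check (sketch): the closed-form normal-equations solution derived above,\n# w* = (A^T A)^{-1} A^T y, should agree with np.linalg.lstsq; solve() avoids the explicit inverse.\nw_ne = np.linalg.solve(A.T.dot(A), A.T.dot(y))\nprint('Normal equations solution:', np.asarray(w_ne).ravel())",
"_____no_output_____"
]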
],
[
[
"## Polinomlar \n\n\n### Parabol\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\ny_0 \\\\ y_1 \\\\ \\vdots \\\\ y_{N-1} \n\\end{array}\n\\right)\n\\approx\n\\left(\n\\begin{array}{ccc}\n1 & x_0 & x_0^2 \\\\ 1 & x_1 & x_1^2 \\\\ \\vdots \\\\ 1 & x_{N-1} & x_{N-1}^2 \n\\end{array}\n\\right) \n\\left(\n\\begin{array}{c}\n w_0 \\\\ w_1 \\\\ w_2\n\\end{array}\n\\right)\n\\end{eqnarray}\n\n### $K$ derecesinde polinom\n\\begin{eqnarray}\n\\left(\n\\begin{array}{c}\ny_0 \\\\ y_1 \\\\ \\vdots \\\\ y_{N-1} \n\\end{array}\n\\right)\n\\approx\n\\left(\n\\begin{array}{ccccc}\n1 & x_0 & x_0^2 & \\dots & x_0^K \\\\ 1 & x_1 & x_1^2 & \\dots & x_1^K\\\\ \\vdots \\\\ 1 & x_{N-1} & x_{N-1}^2 & \\dots & x_{N-1}^K \n\\end{array}\n\\right) \n\\left(\n\\begin{array}{c}\n w_0 \\\\ w_1 \\\\ w_2 \\\\ \\vdots \\\\ w_K\n\\end{array}\n\\right)\n\\end{eqnarray}\n\n\n\\begin{eqnarray}\ny \\approx A w\n\\end{eqnarray}\n\n> $A = A(x)$: Model matrisi \n\n> $w$: Model Parametreleri\n\n> $y$: Gözlemler\n\nPolinom oturtmada ortaya çıkan özel yapılı matrislere __Vandermonde__ matrisleri de denmektedir.",
"_____no_output_____"
]
],
[
[
"x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])\nN = len(x)\nx = x.reshape((N,1))\ny = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]).reshape((N,1))\n#y = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]).reshape((N,1))\n#y = np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]).reshape((N,1))\n\ndef fit_and_plot_poly(degree):\n\n #A = np.hstack((np.power(x,0), np.power(x,1), np.power(x,2)))\n A = np.hstack((np.power(x,i) for i in range(degree+1)))\n # Setup the vandermonde matrix\n xx = np.matrix(np.linspace(np.asscalar(min(x))-1,np.asscalar(max(x))+1,300)).T\n A2 = np.hstack((np.power(xx,i) for i in range(degree+1)))\n\n #plt.imshow(A, interpolation='nearest')\n # Solve the least squares problem\n w_ls,E,rank,sigma = np.linalg.lstsq(A, y)\n f = A2*w_ls\n plt.plot(x, y, 'o')\n plt.plot(xx, f, 'r')\n\n plt.xlabel('x')\n plt.ylabel('y')\n\n plt.gca().set_ylim((0,20))\n #plt.gca().set_xlim((1950,2025))\n \n if E:\n plt.title('Mertebe = '+str(degree)+' Hata='+str(E[0]))\n else:\n plt.title('Mertebe = '+str(degree)+' Hata= 0')\n \n plt.show()\n\nfit_and_plot_poly(0)",
"_____no_output_____"
],
[
"interact(fit_and_plot_poly, degree=(0,10))",
"_____no_output_____"
]
],
[
[
"Overfit: Aşırı uyum",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
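"code",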
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0d685821ac04058a272f6da0a803f230912fdfd | 8,796 | ipynb | Jupyter Notebook | batch/xx_misc.ipynb | mutazag/cv | f5693772bda4e2611808d862756bd9234f02176e | [
"MIT"
] | 1 | 2020-08-06T12:03:40.000Z | 2020-08-06T12:03:40.000Z | batch/xx_misc.ipynb | mutazag/cv | f5693772bda4e2611808d862756bd9234f02176e | [
"MIT"
] | null | null | null | batch/xx_misc.ipynb | mutazag/cv | f5693772bda4e2611808d862756bd9234f02176e | [
"MIT"
] | 1 | 2020-08-10T07:56:24.000Z | 2020-08-10T07:56:24.000Z | 32.21978 | 1,062 | 0.557526 | [
[
[
"import json\nimport azureml.core\nfrom azureml.core import Workspace, Datastore, Dataset, Environment, Experiment\nfrom azureml.data import FileDataset\nfrom azureml.data.dataset_consumption_config import DatasetConsumptionConfig\nfrom azureml.pipeline.core import Pipeline, PipelineData, PipelineParameter, PipelineRun\nfrom azureml.core.compute import ComputeTarget, AmlCompute\n\n\nprint(azureml.core.VERSION)\nversion = dict(zip(['major','minor','patch'], azureml.core.VERSION.split('.')))\nws = Workspace.from_config()",
"1.10.0\n"
]
],
[
[
"\n# get data set by id",
"_____no_output_____"
]
],
[
[
"saved_id = 'b3d66173-5608-41b4-b4d4-4b7bd188a2ee'\nDataset.get_by_id(workspace=ws, id=saved_id)",
"_____no_output_____"
]
],
[
[
"# enumerate datastores and datasets",
"_____no_output_____"
]
],
[
[
"print('>>>>> datastores')\nfor i,ds in enumerate(ws.datastores): \n print(i, ds)\n\nprint('>>>>> datasets using datasets collection') \nfor i, dataset in enumerate(ws.datasets):\n print(i, dataset)\n\n\nprint('>>>>> datasets using get_all') \nfor i, dataset in enumerate(Dataset.get_all(workspace=ws)):\n print(i, dataset)",
">>>>> datastores\n0 images_datastore\n1 azureml_globaldatasets\n2 godzilla\n3 workspacefilestore\n4 workspaceblobstore\n>>>>> datasets using datasets collection\n0 label_ds\n1 input_images\n2 anpr_images\n3 ojsalesdata\n>>>>> datasets using get_all\n0 label_ds\n1 input_images\n2 anpr_images\n3 ojsalesdata\n"
],
[
"print([dataset for dataset in ])",
"_____no_output_____"
],
[
"# dataset_id = '69bfe260-7f14-4de7-a33b-7bf894858e4c'\n\ndatastore_o = ws.datastores['godzilla']\n\ndataset_o = Dataset.get_by_name(workspace=ws, name='anpr_images')\n\ndataset_o.to_path()\n",
"_____no_output_____"
],
[
"dsconfig = DatasetConsumptionConfig('ds_config',dataset_o).as_mount()\ndsconfig.dataset",
"_____no_output_____"
],
[
"dsconfig.dataset",
"_____no_output_____"
],
[
"import pandas as pd \nimport os\nimport base64\nfrom pathlib import Path \n\nfilename = dataset_o.to_path()\n\ndf = pd.DataFrame({'filename': filename})\n\ndf['basename'] = df.filename.apply(lambda fn: os.path.basename(fn))\ndf['dirname'] = df.filename.apply(lambda fn: os.path.dirname(fn))\ndf['encoded'] = df.filename.apply(lambda fn: base64.b64encode(fn.encode()))",
"_____no_output_____"
],
[
"df.query('dirname==\"/2020/08/10\"').to_csv('20200810_images.csv')",
"_____no_output_____"
],
[
"df.encoded.apply(lambda _: base64.b64decode(_).decode())",
"_____no_output_____"
],
[
"str.encode",
"_____no_output_____"
]
],
[
[
"# pipelines",
"_____no_output_____"
]
],
[
[
"exp = Experiment(workspace=ws, name='MAG-batch-paramdataset')\n",
"_____no_output_____"
],
[
"exp.runs",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0d68afe0228431d0893d50a276f1b6d515a3a2a | 64,705 | ipynb | Jupyter Notebook | lgb__model_training.ipynb | African-Quant/WQU_MScFE_Capstone_Grp9 | 464642f5dfa8c018361f51edd0dd1cf883379d60 | [
"MIT"
] | null | null | null | lgb__model_training.ipynb | African-Quant/WQU_MScFE_Capstone_Grp9 | 464642f5dfa8c018361f51edd0dd1cf883379d60 | [
"MIT"
] | null | null | null | lgb__model_training.ipynb | African-Quant/WQU_MScFE_Capstone_Grp9 | 464642f5dfa8c018361f51edd0dd1cf883379d60 | [
"MIT"
] | null | null | null | 45.438904 | 255 | 0.321783 | [
[
[
"<a href=\"https://colab.research.google.com/github/African-Quant/WQU_MScFE_Capstone_Grp9/blob/master/lgb__model_training.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"!pip install -Uqq fastbook --quiet\n!pip install pyfolio --quiet\n\nimport fastbook\n",
"\u001b[K |████████████████████████████████| 727kB 6.9MB/s \n\u001b[K |████████████████████████████████| 194kB 8.6MB/s \n\u001b[K |████████████████████████████████| 1.2MB 14.4MB/s \n\u001b[K |████████████████████████████████| 51kB 7.7MB/s \n\u001b[K |████████████████████████████████| 776.8MB 23kB/s \n\u001b[K |████████████████████████████████| 61kB 8.9MB/s \n\u001b[K |████████████████████████████████| 12.8MB 48.8MB/s \n\u001b[K |████████████████████████████████| 51kB 8.4MB/s \n\u001b[31mERROR: torchtext 0.9.1 has requirement torch==1.8.1, but you'll have torch 1.7.1 which is incompatible.\u001b[0m\n\u001b[K |████████████████████████████████| 92kB 5.8MB/s \n\u001b[K |████████████████████████████████| 61kB 6.8MB/s \n\u001b[?25h Building wheel for pyfolio (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for empyrical (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
],
[
"import os\nimport re\n \nimport random\nimport numpy as np\nfrom sklearn.metrics import accuracy_score, confusion_matrix, classification_report\nfrom sklearn.model_selection import RandomizedSearchCV, GridSearchCV\nimport pandas as pd\nfrom pylab import mpl, plt\nplt.style.use('seaborn')\nmpl.rcParams['font.family'] = 'serif'\nos.environ['PYTHONHASHSEED'] = '0'\n\nimport warnings\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
],
[
"from fastbook import *\n\nfrom pandas.api.types import is_string_dtype, is_numeric_dtype, is_categorical_dtype\nfrom fastai.tabular.all import *\nfrom sklearn.ensemble import RandomForestClassifier\nfrom lightgbm import LGBMClassifier\nfrom xgboost import XGBClassifier\nfrom pyfolio.timeseries import perf_stats \nfrom pyfolio import create_simple_tear_sheet, create_returns_tear_sheet",
"_____no_output_____"
],
[
"pairs = ['AUDCAD', 'AUDCHF', 'AUDJPY', 'AUDNZD', 'AUDUSD', 'CAD', 'CADCHF', \n 'CADJPY', 'CHF', 'CHFJPY', 'EURAUD', 'EURCAD', 'EURCHF', 'EURGBP', \n 'EURJPY', 'EURNZD', 'EURUSD', 'GBPAUD', 'GBPCAD', 'GBPCHF', 'GBPJPY', \n 'GBPNZD', 'GBPUSD', 'JPY', 'NZDCAD', 'NZDCHF', 'NZDJPY', 'NZDUSD']\n\ndef get_data(pair):\n ''' Retrieves (from a github repo) and prepares the data.\n '''\n url = f'https://raw.githubusercontent.com/African-Quant/WQU_MScFE_Capstone_Grp9/master/Datasets/{pair}%3DX.csv'\n raw = pd.read_csv(url)\n raw = pd.DataFrame(raw).drop(['Adj Close', 'Volume'], axis=1)\n raw.iloc[:,0] = pd.to_datetime(raw.iloc[:,0])\n raw.set_index('Date', inplace=True)\n return raw",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"path = '/content/drive/MyDrive/Capstone'\n\nos.chdir(path)",
"_____no_output_____"
],
[
"from models import *",
"_____no_output_____"
]
],
[
[
"### Using *Light Gradient Boosting* method to Predict Market Direction.",
"_____no_output_____"
]
],
[
[
"lgb_results_valid, lgb_results_test = lgb_results()",
"_____no_output_____"
],
[
"lgb_results_valid",
"_____no_output_____"
],
[
"lgb_results_test",
"_____no_output_____"
]
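,
[
"# A self-contained sketch of the idea (an assumption for illustration, NOT the project's\n# models.lgb_results pipeline): fit an LGBMClassifier to predict next-day direction for one\n# pair using lagged log-returns as features.\npair_df = get_data('EURUSD')\nret = np.log(pair_df['Close'] / pair_df['Close'].shift(1))\nlags = 5\nfeats = pd.concat([ret.shift(i) for i in range(1, lags + 1)], axis=1,\n                  keys=['lag_%d' % i for i in range(1, lags + 1)]).dropna()\ndirection = np.sign(ret.loc[feats.index]).replace(0, 1)\nsplit = int(0.8 * len(feats))\nclf = LGBMClassifier(n_estimators=200, max_depth=3, random_state=0)\nclf.fit(feats.iloc[:split], direction.iloc[:split])\nprint('Out-of-sample accuracy:', clf.score(feats.iloc[split:], direction.iloc[split:]))",
"_____no_output_____"
]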
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
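"code",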
"code",
"code",
"code"
]
] |
d0d6a60ee06799e5a3cceeb09ad79cae15f616dc | 8,110 | ipynb | Jupyter Notebook | notebooks/05_Kubeflow_Pipeline/05_01_Pipeline_SDK.ipynb | PatrickXYS/eks-kubeflow-workshop | 99520eb09da86c3ebbdeddc6c9cac57abcf7f05c | [
"Apache-2.0"
] | null | null | null | notebooks/05_Kubeflow_Pipeline/05_01_Pipeline_SDK.ipynb | PatrickXYS/eks-kubeflow-workshop | 99520eb09da86c3ebbdeddc6c9cac57abcf7f05c | [
"Apache-2.0"
] | null | null | null | notebooks/05_Kubeflow_Pipeline/05_01_Pipeline_SDK.ipynb | PatrickXYS/eks-kubeflow-workshop | 99520eb09da86c3ebbdeddc6c9cac57abcf7f05c | [
"Apache-2.0"
] | 3 | 2019-12-07T16:08:24.000Z | 2021-12-01T06:16:34.000Z | 29.816176 | 366 | 0.591739 | [
[
[
"## Introduction to the Pipelines SDK",
"_____no_output_____"
],
[
"The [Kubeflow Pipelines SDK](https://github.com/kubeflow/pipelines/tree/master/sdk) provides a set of Python packages that you can use to specify and run your machine learning (ML) workflows. A pipeline is a description of an ML workflow, including all of the components that make up the steps in the workflow and how the components interact with each other.\n",
"_____no_output_____"
],
[
"Kubeflow website has a very detail expaination of kubeflow components, please go to [Introduction to the Pipelines SDK](https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/) for details",
"_____no_output_____"
],
[
"## Install the Kubeflow Pipelines SDK",
"_____no_output_____"
],
[
"This guide tells you how to install the [Kubeflow Pipelines SDK](https://github.com/kubeflow/pipelines/tree/master/sdk) which you can use to build machine learning pipelines. You can use the SDK to execute your pipeline, or alternatively you can upload the pipeline to the Kubeflow Pipelines UI for execution.\n\nAll of the SDK’s classes and methods are described in the auto-generated [SDK reference docs](https://kubeflow-pipelines.readthedocs.io/en/latest/).\n",
"_____no_output_____"
],
[
"Run the following command to install the Kubeflow Pipelines SDK\n",
"_____no_output_____"
]
],
[
[
"!pip install kfp --upgrade --user",
"_____no_output_____"
]
],
[
[
"After successful installation, the command `dsl-compile` should be available. You can use this command to verify it",
"_____no_output_____"
]
],
[
[
"!which dsl-compile",
"_____no_output_____"
]
],
[
[
"> Note: Please check official documentation to understand Pipline concetps before your move forward. [Introduction to Pipelines SDK](https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/)",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
],
[
"## Build simple components and pipelines",
"_____no_output_____"
],
[
"In this example, we want to calculate sum of three numbers. \n\n1. Let's assume we have a python image to use. It accepts two arguments and return sum of them. \n\n2. The sum of a and b will be used to calculate final result with sum of c and d. In total, we will have three arithmetical operators. Then we use another echo operator to print the result. ",
"_____no_output_____"
],
[
"### 1. Create a container image for each component\n\nAssumes that you have already created a program to perform the task required in a particular step of your ML workflow. For example, if the task is to train an ML model, then you must have a program that does the training,\n\nYour component can create `outputs` that the downstream components can use as `inputs`. This will be used to build Job Directed Acyclic Graph (DAG)\n",
"_____no_output_____"
],
[
"> In this case, we will use a python base image to do the calculation. We skip buiding our own image.",
"_____no_output_____"
],
[
"### 2. Create a Python function to wrap your component\n\nDefine a Python function to describe the interactions with the Docker container image that contains your pipeline component.\n\nHere, in order to simplify the process, we use simple way to calculate sum. Ideally, you need to build a new container image for your code change.",
"_____no_output_____"
]
],
[
[
"import kfp\nfrom kfp import dsl\n\ndef add_two_numbers(a, b):\n return dsl.ContainerOp(\n name='calculate_sum',\n image='python:3.6.8',\n command=['python', '-c'],\n arguments=['with open(\"/tmp/results.txt\", \"a\") as file: file.write(str({} + {}))'.format(a, b)],\n file_outputs={\n 'data': '/tmp/results.txt',\n }\n )\n\ndef echo_op(text):\n return dsl.ContainerOp(\n name='echo',\n image='library/bash:4.4.23',\n command=['sh', '-c'],\n arguments=['echo \"Result: {}\"'.format(text)]\n )",
"_____no_output_____"
]
],
[
[
"### 3. Define your pipeline as a Python function\n\nDescribe each pipeline as a Python function.",
"_____no_output_____"
]
],
[
[
"@dsl.pipeline(\n name='Calcualte sum pipeline',\n description='Calculate sum of numbers and prints the result.'\n)\ndef calculate_sum(\n a=7,\n b=10,\n c=4,\n d=7\n):\n \"\"\"A four-step pipeline with first two running in parallel.\"\"\"\n\n sum1 = add_two_numbers(a, b)\n sum2 = add_two_numbers(c, d)\n sum = add_two_numbers(sum1.output, sum2.output)\n\n echo_task = echo_op(sum.output)",
"_____no_output_____"
]
],
[
[
"### 4. Compile the pipeline\n\nCompile the pipeline to generate a compressed YAML definition of the pipeline. The Kubeflow Pipelines service converts the static configuration into a set of Kubernetes resources for execution.\n\nThere are two ways to compile the pipeline. Either use python lib `kfp.compiler.Compiler.compile ` or use binary `dsl-compile` command.",
"_____no_output_____"
]
],
[
[
"kfp.compiler.Compiler().compile(calculate_sum, 'calculate-sum-pipeline.zip')",
"_____no_output_____"
],
[
"# If you have a python file, you can also try build pipeline using `dsl-compile` command.\n# dsl-compile --py [path/to/python/file] --output my-pipeline.zip",
"_____no_output_____"
]
],
[
[
"### 5. Deploy pipeline\n\nThere're two ways to deploy the pipeline. Either upload the generate `.tar.gz` file through the `Kubeflow Pipelines UI`, or use `Kubeflow Pipeline SDK` to deploy it.\n\nWe will only show sdk usage here.",
"_____no_output_____"
]
],
[
[
"client = kfp.Client()\naws_experiment = client.create_experiment(name='aws')\nmy_run = client.run_pipeline(aws_experiment.id, 'calculate-sum-pipeline', \n 'calculate-sum-pipeline.zip')",
"_____no_output_____"
]
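,
[
"# Optional (sketch): pipeline parameters can be overridden at submission time via the\n# `params` argument of run_pipeline; the values below are illustrative.\nmy_run2 = client.run_pipeline(aws_experiment.id, 'calculate-sum-pipeline-params',\n                              'calculate-sum-pipeline.zip',\n                              params={'a': 1, 'b': 2, 'c': 3, 'd': 4})",
"_____no_output_____"
]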
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
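"code",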
"code"
]
] |
d0d6ab88a11d7329bc0897d23e39be62632c0c32 | 1,663 | ipynb | Jupyter Notebook | Untitled.ipynb | Xandr2410/jupyter_lab | e18e0e70a9f9c4e5fbab495afd317073d6ce5f06 | [
"MIT"
] | null | null | null | Untitled.ipynb | Xandr2410/jupyter_lab | e18e0e70a9f9c4e5fbab495afd317073d6ce5f06 | [
"MIT"
] | null | null | null | Untitled.ipynb | Xandr2410/jupyter_lab | e18e0e70a9f9c4e5fbab495afd317073d6ce5f06 | [
"MIT"
] | null | null | null | 29.175439 | 148 | 0.612748 | [
[
[
"pip install requests",
"Requirement already satisfied: requests in /srv/conda/envs/notebook/lib/python3.7/site-packages (2.27.1)\nRequirement already satisfied: charset-normalizer~=2.0.0 in /srv/conda/envs/notebook/lib/python3.7/site-packages (from requests) (2.0.10)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /srv/conda/envs/notebook/lib/python3.7/site-packages (from requests) (1.26.8)\nRequirement already satisfied: certifi>=2017.4.17 in /srv/conda/envs/notebook/lib/python3.7/site-packages (from requests) (2021.10.8)\nRequirement already satisfied: idna<4,>=2.5 in /srv/conda/envs/notebook/lib/python3.7/site-packages (from requests) (3.3)\nNote: you may need to restart the kernel to use updated packages.\n"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0d6b1b153a5e97eb785ee48d2035459739e3f78 | 5,940 | ipynb | Jupyter Notebook | Exercise 3/Exercise_3.ipynb | jantiegges/Machine-Intelligence-I | 05fa333bea9abe75648cec007bcb99eeacd367a5 | [
"MIT"
] | null | null | null | Exercise 3/Exercise_3.ipynb | jantiegges/Machine-Intelligence-I | 05fa333bea9abe75648cec007bcb99eeacd367a5 | [
"MIT"
] | null | null | null | Exercise 3/Exercise_3.ipynb | jantiegges/Machine-Intelligence-I | 05fa333bea9abe75648cec007bcb99eeacd367a5 | [
"MIT"
] | null | null | null | 28.695652 | 96 | 0.450168 | [
[
[
"import numpy as np",
"_____no_output_____"
],
[
"class MultiLayerPerceptron():\n \n def __init__(self):\n self.inputLayer = 1\n self.hiddenLayer = np.array([3])\n self.outputLayer = [1]\n self.learningRate = 0.5\n self.weights = self.starting_weights()\n self.bias = self.starting_bias()\n self.transferFunc = np.tanh\n self.outputFunc = lambda x: 1*x\n \n def starting_weights(self):\n # init weights\n \n weightsTmp = []\n \n # input layer\n x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[0]*self.inputLayer)\n weightsTmp.append(x.reshape((self.inputLayer, self.hiddenLayer[0])))\n \n # further hidden layers\n for i in range(1, len(self.hiddenLayer)):\n x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[i]*self.hiddenLayer[i-1])\n weightsTmp.append(x.reshape((self.hiddenLayer[i-1], self.hiddenLayer[i])))\n \n # output layer\n x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[-1]*self.outputLayer[0])\n weightsTmp.append(x.reshape((self.hiddenLayer[-1], self.outputLayer[0])))\n \n return weightsTmp\n \n def starting_bias(self): \n # init bias\n biasTmp = []\n \n # input layer\n x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[0])\n biasTmp.append(x.reshape((1, self.hiddenLayer[0])))\n \n # further hidden layers\n for i in range(1, len(self.hiddenLayer)):\n x = np.random.uniform(-0.5, 0.5, self.hiddenLayer[i])\n biasTmp.append(x.reshape((1, self.hiddenLayer[i])))\n \n # TODO: output bias?\n \n \n return biasTmp\n \n def forward_propagation(self, x):\n # perform forward propagation\n \n totalInput = []\n totalAct = []\n \n # add single dimension\n X = np.expand_dims(x, axis=0)\n \n # input layer\n totalInput.append(np.dot(X, self.weights[0]) + self.bias[0])\n totalAct.append(self.transferFunc(totalInput[-1]))\n \n # hidden layer\n for i in range(1, len(self.hiddenLayer)):\n totalInput.append(np.dot(totalAct[-1], self.weights[i]) + self.bias[i])\n totalAct.append(self.transferFunc(totalInput[-1]))\n \n # output layer\n totalInput.append(np.dot(totalAct[-1], self.weights[-1]))\n totalAct.append(self.outputFunc(totalInput[-1]))\n\n return totalInput, totalAct\n \n def back_propagation(self, x):\n \n \n \n def fit(self, x, y):\n \n pred = []\n for elem in x:\n totalInput, totalAct = self.forward_propagation(elem)\n pred.append(totalAct[-1])\n \n # get rid of dimensions\n pred = np.array(pred)[:,0,0]\n \n outputError = 0.5 * (y - pred)**2\n \n \n \n return(error)\n \n ",
"_____no_output_____"
],
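[
"# A hedged sketch of the missing backward pass: one gradient-descent step for a\n# 1-hidden-layer tanh network with identity output and squared error, written\n# independently of the class above (shapes: x is (N, n_in), y is (N, n_out)).\ndef backprop_step(W1, b1, W2, x, y, lr=0.5):\n    a1 = np.tanh(np.dot(x, W1) + b1)               # hidden activations\n    pred = np.dot(a1, W2)                          # linear output\n    delta2 = pred - y                              # dE/dpred for E = 0.5*(y - pred)**2\n    dW2 = np.dot(a1.T, delta2)\n    delta1 = np.dot(delta2, W2.T) * (1 - a1**2)    # tanh'(z) = 1 - tanh(z)**2\n    dW1 = np.dot(x.T, delta1)\n    db1 = delta1.sum(axis=0, keepdims=True)\n    return W1 - lr*dW1, b1 - lr*db1, W2 - lr*dW2",
"_____no_output_____"
],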
[
"data = np.genfromtxt(\"RegressionData.txt\")\nx = data[:, 0]\ny = data[:, 1]\n\nmlp = MultiLayerPerceptron()\n\n#totalInput, totalAct = mlp.forward_propagation(x)\n\nerror = mlp.fit(x, y)\nprint(error)\n",
"[1.31109168e-01 4.81766322e-03 3.78559007e-04 3.14610745e-02\n 3.10764798e-02 4.31370918e-01 3.79243403e-01 1.36738388e-04\n 2.65899571e-01 7.04206570e-01]\n"
],
[
"data = np.genfromtxt(\"RegressionData.txt\")\nx = data[:, 0]\ny = data[:, 1]\n\nx = np.expand_dims(x, axis=1)\n\nX = np.zeros((10,3), dtype=x.dtype) + x\nprint(X.shape)\n \n \nz = [3]\nprint(z[-1])",
"(10, 3)\n3\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0d6b9994dca5cb61b102fae4f1ca73a967acdb9 | 56,369 | ipynb | Jupyter Notebook | assets_datasets/notebook_scratch_datasets.ipynb | msaddler/pitchnet | 8e26034be177deff7447ade7f782a4a9581c2188 | [
"MIT"
] | 6 | 2021-12-21T05:38:03.000Z | 2022-03-31T21:05:56.000Z | assets_datasets/notebook_scratch_datasets.ipynb | msaddler/pitchnet | 8e26034be177deff7447ade7f782a4a9581c2188 | [
"MIT"
] | null | null | null | assets_datasets/notebook_scratch_datasets.ipynb | msaddler/pitchnet | 8e26034be177deff7447ade7f782a4a9581c2188 | [
"MIT"
] | 1 | 2022-03-28T19:33:53.000Z | 2022-03-28T19:33:53.000Z | 42.22397 | 230 | 0.550533 | [
[
[
"import sys\nimport os\nimport h5py\nimport json\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nfrom stimuli_f0_labels import get_f0_bins, f0_to_label\n\nfn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/processed/dataset_2019-11-22-2300/PND_sr32000_v08.hdf5'\n# fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/processed/dataset_2019-11-16-2300/PND_sr32000_v07.hdf5'\n# fn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/processed/dataset_2019-08-16-1200/PND_sr32000_v04.hdf5'\n\nf = h5py.File(fn, 'r')\nfor v in f.values():\n print(v)\nfor v in f['config'].values():\n print(v)\n\nfile_indexes = f['source_file_index'][:]\nsegment_indexes = f['source_file_row'][:]\nf0_values = f['nopad_f0_mean'][:]\nsource_file_encoding_dict = f['config/source_file_encoding_dict'][0]\nsource_file_encoding_dict = source_file_encoding_dict.replace('\"', '\"\"\"')\nsource_file_encoding_dict = source_file_encoding_dict.replace('\\'', '\"')\nsource_file_encoding_dict = json.loads(source_file_encoding_dict)\n\nf.close()\n",
"_____no_output_____"
],
[
"file_index_to_filename_map = {}\nfor key in source_file_encoding_dict.keys():\n file_index_to_filename_map[source_file_encoding_dict[key]] = os.path.basename(key)\n\n\nf0_bins = get_f0_bins()\ndataset_separated_histograms = {}\ndataset_separated_unique_segments = {}\ndataset_separated_total_segments = {}\n\nfor file_index in np.unique(file_indexes):\n f0_values_from_file_idx = f0_values[file_indexes == file_index]\n segment_indexes_from_file_idx = segment_indexes[file_indexes == file_index]\n counts, bins = np.histogram(f0_values_from_file_idx, bins=f0_bins)\n dset_key = file_index_to_filename_map[file_index]\n dataset_separated_histograms[dset_key] = counts\n dataset_separated_unique_segments[dset_key] = np.unique(segment_indexes_from_file_idx).shape[0]\n dataset_separated_total_segments[dset_key] = segment_indexes_from_file_idx.shape[0]\n\ndataset_details = {\n 'RWC': {\n 'key': 'f0_segments_2019AUG16_rwc.hdf5',\n 'plot_kwargs': {'color': [0, 0.8, 0]},\n },\n 'NSYNTH': {\n 'key': 'f0_segments_2019AUG16_nsynth.hdf5',\n 'plot_kwargs': {'color': [0, 0.6, 0]}\n },\n 'WSJ': {\n 'key': 'f0_segments_2019AUG16_wsj.hdf5',\n 'plot_kwargs': {'color': [0.6, 0.6, 0.6]}\n },\n 'SWC': {\n 'key': 'f0_segments_2019AUG16_swc.hdf5',\n 'plot_kwargs': {'color': [0.4, 0.4, 0.4]}\n },\n 'CSLUKIDS': {\n 'key': 'f0_segments_2019NOV16_cslu_kids.hdf5',\n 'plot_kwargs': {'color': [0.2, 0.2, 0.2]}\n },\n 'CMUKIDS': {\n 'key': 'f0_segments_2019NOV22_cmu_kids.hdf5',\n 'plot_kwargs': {'color': [0.8, 0.8, 0.8]}\n },\n}\n\ndataset_list = ['RWC', 'NSYNTH', 'WSJ', 'SWC', 'CSLUKIDS', 'CMUKIDS']\ndataset_list.reverse()\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 6))\nx = np.arange(0, len(counts))\nbottom = np.zeros_like(x)\nfor dataset in dataset_list:\n key = dataset_details[dataset]['key']\n if key in dataset_separated_histograms.keys():\n y = dataset_separated_histograms[key]\n plot_kwargs = dataset_details[dataset]['plot_kwargs']\n label = '{:s} ({} total; {} unique; {:.1f} mean repeats)'.format(\n dataset,\n dataset_separated_total_segments[key],\n dataset_separated_unique_segments[key],\n dataset_separated_total_segments[key] / dataset_separated_unique_segments[key])\n ax.fill_between(x, y1=bottom, y2=bottom+y, **plot_kwargs, lw=0, label=label)\n bottom = bottom + y\n else:\n print(key)\n\nax.legend(loc='upper right', framealpha=1, facecolor='w', edgecolor='w', fontsize=10)\nax.set_xlim([x[0], x[-1]])\nax.set_ylim([0, np.max(bottom)])\nax.set_ylabel('Number of stimuli')\nax.set_xlabel('F0 bin (Hz)')\nclass_indexes = np.linspace(x[0], x[-1], 15, dtype=int)\nf0_class_labels = ['{:.0f}'.format(_) for _ in f0_bins[class_indexes]]\nax.set_xticks(class_indexes)\nax.set_xticklabels(f0_class_labels)\nplt.tight_layout()\nplt.show()\n\n# save_dir = '/om2/user/msaddler/pitchnet/assets_psychophysics/figures/archive_2019_12_05_PNDv08_archSearch01/'\n# save_fn = '2019NOV27_PND_v08_dataset_composition.pdf'\n# print(os.path.join(save_dir, save_fn))\n# fig.savefig(os.path.join(save_dir, save_fn), bbox_inches='tight')\n",
"_____no_output_____"
],
[
"# Check how many bins are spanned by speech-only and music-only datasets\nf0_bin_min=80\nf0_bin_max=1e3\nf0_min=80\nf0_max=450.91752190019395\nbinwidth_in_octaves=1/192\nf0_values = np.arange(f0_min, f0_max+1e-2, 1e-2)\nf0_bins = get_f0_bins(f0_min=f0_bin_min, f0_max=f0_bin_max, binwidth_in_octaves=binwidth_in_octaves)\nf0_labels = f0_to_label(f0_values, f0_bins, right=False)\n# Slightly hacky way to determine the correct value of f0_max to ensure all bins are equally wide\nf0_min_label = np.squeeze(np.argwhere(f0_bins >= f0_min))[0]\nf0_max_label = np.squeeze(np.argwhere(f0_bins < f0_max))[-1] + 1\nf0_min_label, f0_max_label\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport glob\nimport numpy as np\nimport scipy.signal\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\n\nregex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PND_sr32000_v08_*.hdf5'\n\nlist_fn = sorted(glob.glob(regex_fn))\nfn = list_fn[-1]\n\nwith h5py.File(fn, 'r') as f:\n sr = f['sr'][0]\n IDX = np.random.randint(0, f['nopad_f0_mean'].shape[0])\n f0 = f['nopad_f0_mean'][IDX]\n y_fg = util_stimuli.set_dBSPL(f['stimuli/signal'][IDX], 60.0)\n y_bg = util_stimuli.set_dBSPL(f['stimuli/noise'][IDX], 60.0)\n\n\nfxx, pxx = util_stimuli.power_spectrum(y_fg, sr)\nfenv, penv = util_stimuli.get_spectral_envelope_lp(y_fg, sr, M=6)\npenv = penv - penv.max() + pxx.max()\n\nprint(util_stimuli.get_dBSPL(y_fg))\nprint(util_stimuli.get_dBSPL(y_bg))\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 2.5))\nax.plot(fxx, pxx, color='k', lw=1.0)\nax.plot(fenv, penv, color='r', lw=1.0)\nax = util_figures.format_axes(ax,\n xscale='linear',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB SPL)',\n xlimits=[40, sr/2],\n ylimits=None,\n spines_to_hide=['right', 'top'])\nplt.show()\n\nipd.display(ipd.Audio(y_fg, rate=sr))\n",
"_____no_output_____"
],
[
"y = y_fg\n\nt = np.arange(0, len(y)) / sr\nx = np.zeros_like(y)\nfor f in np.arange(f0, sr/2, f0):\n x = x + np.sin(2*np.pi*f*t)\n# x = np.random.randn(*t.shape)\n\nb_lp, a_lp = util_stimuli.get_spectral_envelope_lp_coefficients(y, M=6)\nx = scipy.signal.lfilter(b_lp, a_lp, x)\n\n\nx = util_stimuli.set_dBSPL(x, 60.0)\n\nfyy, pyy = util_stimuli.power_spectrum(y, sr)\nfxx, pxx = util_stimuli.power_spectrum(x, sr)\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 2.5))\nax.plot(fyy, pyy, color='k', lw=1.0)\nax.plot(fxx, pxx, color='r', lw=1.0)\n\nax = util_figures.format_axes(ax,\n xscale='linear',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB SPL)',\n xlimits=[40, sr/2],\n ylimits=None,\n spines_to_hide=['right', 'top'])\nplt.show()\n\nplt.figure(figsize=(8, 1.5))\nplt.plot(t, y, color='k')\nplt.plot(t, x, color='r')\nplt.show()\n\nipd.display(ipd.Audio(y, rate=sr))\nipd.display(ipd.Audio(x, rate=sr))\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport json\nimport glob\nimport numpy as np\nimport scipy.signal\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\n\n# regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/SPECTRAL_STATISTICS_v00/*.hdf5'\n# regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/*.hdf5'\n# regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/matchedPNDv08_snr_neg10pos10_phase0/SPECTRAL_STATISTICS_v00/*.hdf5'\nregex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/PNDv08negated12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/*.hdf5'\nlist_fn = sorted(glob.glob(regex_fn))\n\nlist_key = ['stimuli/signal', 'stimuli/noise']\n\ndict_mfcc = {key: [] for key in list_key}\ndict_mean_spectra = {}\n\nfor itr_fn, fn in enumerate(list_fn):\n with h5py.File(fn, 'r') as f:\n sr = f['sr'][0]\n freqs = f['freqs'][0]\n nopad_start = f['nopad_start'][0]\n nopad_end = f['nopad_end'][0]\n for key in list_key:\n if itr_fn == 0:\n dict_mean_spectra[key] = {\n 'freqs': freqs,\n 'summed_power_spectrum': np.zeros_like(freqs),\n 'count': 0,\n 'nfft': nopad_end - nopad_start,\n }\n nrows = f[key + '_power_spectrum'].shape[0]\n nrows_steps = np.linspace(0, nrows, 2, dtype=int)\n for nrow_start, nrow_end in zip(nrows_steps[:-1], nrows_steps[1:]):\n all_spectra = f[key + '_power_spectrum'][nrow_start:nrow_end]\n# TRUNCATE = -20\n# all_spectra[all_spectra < TRUNCATE] = TRUNCATE\n IDX = np.isfinite(np.sum(all_spectra, axis=1))\n dict_mean_spectra[key]['summed_power_spectrum'] += np.sum(all_spectra[IDX], axis=0)\n dict_mean_spectra[key]['count'] += np.sum(IDX, axis=0)\n \n dict_mfcc[key].append(f[key + '_mfcc'][:])\n if itr_fn % 5 == 0:\n print(itr_fn, os.path.basename(fn), dict_mean_spectra[key]['count'])\n\nfor key in list_key:\n print('concatenating {} mfcc arrays'.format(key))\n dict_mfcc[key] = np.concatenate(dict_mfcc[key], axis=0)\n",
"_____no_output_____"
],
[
"results_dict = {}\nfor key in sorted(dict_mfcc.keys()):\n mfcc_cov = np.cov(dict_mfcc[key], rowvar=False)\n mfcc_mean = np.mean(dict_mfcc[key], axis=0)\n results_dict[key] = {\n 'mfcc_mean': mfcc_mean,\n 'mfcc_cov': mfcc_cov,\n 'sr': sr,\n 'mean_power_spectrum': dict_mean_spectra[key]['summed_power_spectrum'] / dict_mean_spectra[key]['count'],\n 'mean_power_spectrum_freqs': dict_mean_spectra[key]['freqs'],\n 'mean_power_spectrum_count': dict_mean_spectra[key]['count'],\n 'mean_power_spectrum_n_fft': dict_mean_spectra[key]['nfft'],\n }\n \n# results_dict[key]['mean_power_spectrum'] = 10*np.log10(results_dict[key]['mean_power_spectrum'])\n print(results_dict[key]['mean_power_spectrum'].max())\n print(results_dict[key]['mean_power_spectrum'].min())\nfn_results_dict = os.path.join(os.path.dirname(fn), 'results_dict.json')\nwith open(fn_results_dict, 'w') as f:\n json.dump(results_dict, f, sort_keys=True, cls=util_misc.NumpyEncoder)\nprint(fn_results_dict)\n\nfor k0 in sorted(results_dict.keys()):\n for k1 in sorted(results_dict[k0].keys()):\n val = np.array(results_dict[k0][k1])\n if len(val.reshape([-1])) > 10:\n print(k0, k1, val.shape)\n else:\n print(k0, k1, val)\n\n# nvars = dict_mfcc[key].shape[1]\n# ncols = 4\n# nrows = int(np.ceil(nvars/ncols))\n# fig, ax = plt.subplots(ncols=ncols,\n# nrows=nrows,\n# figsize=(3*ncols, 2*nrows))\n# ax = ax.reshape([-1])\n\n# for ax_idx in range(nvars):\n# bins = 100#np.linspace(-2.5, 2.5, 100)\n# for key in sorted(results_dict.keys()):\n# vals = dict_mfcc[key][:, ax_idx]\n# ax[ax_idx].hist(vals, bins=bins, alpha=0.5)\n# ax[ax_idx] = util_figures.format_axes(ax[ax_idx],\n# str_xlabel='mfcc {}'.format(ax_idx + 1),\n# str_ylabel='Count',\n# xlimits=None,\n# ylimits=None)\n \n# for ax_idx in range(nvars, ax.shape[0]):\n# ax[ax_idx].axis('off')\n\n# plt.tight_layout()\n# plt.show()\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport json\nimport glob\nimport numpy as np\nimport librosa\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\ndata_dir = '/om/scratch/Fri/msaddler/data_pitchnet/'\nbasename = 'SPECTRAL_STATISTICS_v00/results_dict_v00.json'\nlist_dataset_tag = [\n ('PND_v08/noise_TLAS_snr_neg10pos10', 'Natural sounds'),\n# 'PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01', 'Natural sounds (lowpass)'),\n# 'PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00', 'Natural sounds (highpass)'),\n# 'PND_v08inst/noise_TLAS_snr_neg10pos10',\n# 'PND_v08spch/noise_TLAS_snr_neg10pos10',\n ('PND_mfcc/PNDv08matched12_TLASmatched12_snr_neg10pos10_phase0', 'Synthetic (12-MFCC-matched to natural)'),\n# 'PND_mfcc/negatedPNDv08_snr_neg10pos10_phase0',\n# ('PND_mfcc/debug', 'Synthetic (flat spectrum)'),\n# ('PND_mfcc/PNDv08matched12_TLASmatched12_snr_neg10pos10_phase3', 'Synthetic (12-MFCC-matched)'),\n# ('PND_mfcc/PNDv08negated12_TLASmatched12_snr_neg10pos10_phase3', 'Synthetic (12-MFCC-negated)'),\n]\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 7.5))\n\nclist = 'krbmgyc'\nfor cidx, (dataset_tag, label_tag) in enumerate(list_dataset_tag):\n fn_results_dict = os.path.join(data_dir, dataset_tag, basename)\n with open(fn_results_dict, 'r') as f:\n results_dict = json.load(f)\n \n for key in sorted(results_dict.keys()):\n MEAN_FXX = np.array(results_dict[key]['mean_power_spectrum_freqs'])\n MEAN_PXX = np.array(results_dict[key]['mean_power_spectrum_envelope'])\n# MEAN_PXX -= MEAN_PXX.max()\n \n# sr = np.array(results_dict[key]['sr'])\n# nfft = results_dict[key]['mean_power_spectrum_n_fft']\n# mfcc_mean = np.array(results_dict[key]['mfcc_mean'])\n# mfcc_mean[12:] = 0\n# M = librosa.filters.mel(sr, nfft, n_mels=len(mfcc_mean))\n# Minv = np.linalg.pinv(M)\n# power_spectrum = util_stimuli.get_power_spectrum_from_mfcc(mfcc_mean, Minv) \n# MEAN_FXX = np.fft.rfftfreq(nfft, d=1/sr)\n# MEAN_PXX = 10*np.log10(power_spectrum)\n \n color = clist[cidx]\n ls = '-'\n if 'noise' in key:\n ls = '--'\n ax.plot(MEAN_FXX,\n MEAN_PXX,\n label='{} : {}'.format(label_tag, key),\n lw=2.5,\n color=color,\n ls=ls)\n \n MEAN_MFCC = np.array(results_dict[key]['mean_power_spectrum_freqs'])\n\n ax.legend(loc='lower left')\n ax = util_figures.format_axes(ax,\n xscale='log',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB)',\n xlimits=[40, None],\n ylimits=[-40, None],\n spines_to_hide=['right', 'top'])\nplt.show()\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport json\nimport glob\nimport copy\nimport pdb\nimport numpy as np\nimport scipy.signal\nimport librosa\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\nfn_results_dict = '/om/scratch/Mon/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json'\nwith open(fn_results_dict, 'r') as f:\n results_dict = json.load(f)\n\nN = 1\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(7.5, 5.0))\nfor itrN in range(N):\n for key in sorted(results_dict.keys())[1:]:\n mfcc_mean = np.array(results_dict[key]['mfcc_mean'])\n mfcc_cov = np.array(results_dict[key]['mfcc_cov'])\n sr = results_dict[key]['sr']\n dur = 0.150\n\n nfft = int(dur*sr)\n M = librosa.filters.mel(sr, nfft, n_mels=len(mfcc_mean))\n Minv = np.linalg.pinv(M)\n \n mfcc = np.random.multivariate_normal(mfcc_mean, mfcc_cov)\n mfcc[0:] = 0\n power_spectrum = util_stimuli.get_power_spectrum_from_mfcc(mfcc, Minv)\n power_spectrum_freqs = np.fft.rfftfreq(nfft, d=1/sr)\n \n f0 = 250.0\n frequencies = np.arange(f0, sr/2, f0)\n amplitudes = np.interp(frequencies,\n power_spectrum_freqs, \n np.sqrt(power_spectrum))\n signal = util_stimuli.complex_tone(f0,\n sr,\n dur,\n harmonic_numbers=None,\n frequencies=frequencies,\n amplitudes=amplitudes,\n phase_mode='sine',\n offset_start=True,\n strict_nyquist=True)\n# signal = util_stimuli.impose_power_spectrum(signal, power_spectrum)\n if 'noise' in key:\n signal = np.random.randn(nfft)\n signal = util_stimuli.impose_power_spectrum(signal, power_spectrum)\n \n kwargs_plot = {\n 'ls': '-',\n 'color': 'b',\n }\n if 'noise' in key:\n kwargs_plot['ls'] = '-'\n kwargs_plot['color'] = 'k'\n \n fxx, pxx = util_stimuli.power_spectrum(signal, sr)\n ax.plot(fxx, pxx-pxx.max(), lw=0.25, color='m')\n \n power_spectrum = 10*np.log10(power_spectrum)\n ax.plot(fxx, power_spectrum-power_spectrum.max(), lw=0.5, **kwargs_plot)\n\n MEAN_FXX = np.array(results_dict[key]['mean_power_spectrum_freqs'])\n MEAN_PXX = np.array(results_dict[key]['mean_power_spectrum'])\n# MEAN_FXX = np.fft.rfftfreq(nfft, d=1/sr)\n# MEAN_PXX = util_stimuli.get_power_spectrum_from_mfcc(mfcc_mean, Minv)\n# MEAN_PXX = 10*np.log10(MEAN_PXX)\n \n ax.plot(MEAN_FXX, MEAN_PXX-MEAN_PXX.max(), lw=2.5, **kwargs_plot)\n \n# mfcc2 = util_stimuli.get_mfcc(signal, M)\n# mfcc2[12:] = 0\n# pxx2 = util_stimuli.get_power_spectrum_from_mfcc(mfcc2, Minv)\n# pxx2 = 10*np.log10(pxx2)\n# fxx2 = np.fft.rfftfreq(len(signal), d=1/sr)\n# ax.plot(fxx2, pxx2-pxx2.max(), color='g')\n\n# Use this function to specify axis scaling, limits, labels, etc.\nax = util_figures.format_axes(ax,\n xscale='linear',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB)',\n xlimits=[40, None],\n ylimits=[-80, None],\n spines_to_hide=['right', 'top'])\nplt.show()\n\nipd.display(ipd.Audio(signal, rate=sr))\n",
"_____no_output_____"
],
[
"import stimuli_generate_random_synthetic_tones\nimport importlib\nimportlib.reload(stimuli_generate_random_synthetic_tones)\n\n\nspectral_statistics_filename = '/om/scratch/Mon/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json'\n\nstimuli_generate_random_synthetic_tones.spectrally_shaped_synthetic_dataset(\n 'tmp.hdf5',\n 500,\n spectral_statistics_filename,\n fs=32e3,\n dur=0.150,\n phase_modes=['cos'],\n range_f0=[80.0, 1001.3713909809752],\n range_snr=[-10., 10.],\n range_dbspl=[30., 90.],\n n_mfcc=12,\n invert_signal_filter=0,\n invert_noise_filter=False,\n generate_signal_in_fft_domain=False,\n out_combined_key='stimuli/signal_in_noise',\n out_signal_key='stimuli/signal',\n out_noise_key='stimuli/noise',\n out_snr_key='snr',\n out_augmentation_prefix='augmentation/',\n random_seed=0,\n disp_step=50)\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport json\nimport glob\nimport copy\nimport pdb\nimport librosa\nimport numpy as np\nimport scipy.signal\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\nkey_list = ['stimuli/signal_in_noise']#, 'stimuli/signal', 'stimuli/noise']\nwith h5py.File('tmp.hdf5', 'r') as f:\n sr = f['sr'][0]\n y = {}\n for k in key_list:\n y[k] = f[k][np.random.randint(500)]\n for k in util_misc.get_hdf5_dataset_key_list(f):\n print(k, f[k])\n\nipd.display(ipd.Audio(y['stimuli/signal_in_noise'], rate=sr))\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(7.5, 5.0))\nfor k in key_list:\n fxx, pxx = util_stimuli.power_spectrum(y[k], sr)\n ax.plot(fxx, pxx-pxx.max(), lw=0.5, label=k)\n\nax.legend()\nax = util_figures.format_axes(ax,\n xscale='linear',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB SPL)',\n xlimits=[40, None],\n ylimits=[-60, None],\n spines_to_hide=['right', 'top'])\nplt.show()\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport glob\nimport numpy as np\nimport scipy.signal\nimport scipy.fftpack\nimport librosa\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\n\nregex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PND_sr32000_v08_*.hdf5'\nregex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/stim_0000000-0002100.hdf5'\n\nlist_fn = sorted(glob.glob(regex_fn))\nfn = list_fn[-1]\n\nkey_y = 'stimuli/signal'\nkey_f0 = 'nopad_f0_mean'\nkey_f0 = 'f0'\n\nwith h5py.File(fn, 'r') as f:\n sr = f['sr'][0]\n IDX = np.random.randint(0, f[key_f0].shape[0])\n f0 = f[key_f0][IDX]\n y = util_stimuli.set_dBSPL(f[key_y][IDX], 60.0)\n\n\nfxx, pxx = util_stimuli.power_spectrum(y, sr)\n\nprint(f0)\nharmonic_frequencies = np.arange(f0, sr/2, f0)\nIDX = np.digitize(harmonic_frequencies, fxx)\nharmonic_freq_bins = fxx[IDX]\nspectrum_freq_bins = pxx[IDX]\nenvelope_spectrum = np.interp(fxx, harmonic_freq_bins, spectrum_freq_bins)\n\n# philbert = np.abs(scipy.signal.hilbert(pxx+50))\n\nfenv, penv = util_stimuli.get_spectral_envelope_lp(y, sr, M=12)\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 5))\nax.plot(fxx, pxx, color='k', lw=1.0)\nax.plot(fxx, envelope_spectrum, color='r', lw=2.0)\n# ax.plot(fenv, penv, color='r', lw=1.0)\nax = util_figures.format_axes(ax,\n xscale='linear',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB SPL)',\n xlimits=[40, sr/2],\n ylimits=[-20, None],\n spines_to_hide=['right', 'top'])\nplt.show()\n\nipd.display(ipd.Audio(y, rate=sr))\n",
"_____no_output_____"
],
[
"# import sys\n# import os\n# import h5py\n# import json\n# import glob\n# import numpy as np\n# import scipy.signal\n# %matplotlib inline\n# import matplotlib.pyplot as plt\n# import IPython.display as ipd\n# sys.path.append('/packages/msutil')\n# import util_stimuli\n# import util_misc\n# import util_figures\n\nimport importlib\nimport stimuli_compute_statistics\nimportlib.reload(stimuli_compute_statistics)\n\nimport stimuli_analyze_pystraight\nimportlib.reload(stimuli_analyze_pystraight)\n\n# regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/PND*.hdf5'\nregex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_mfcc/PNDv08PYSnegated12_TLASmatched12_snr_neg10pos10_phase3/PYSTRAIGHT_v01_foreground/*.hdf5'\nprint(regex_fn)\nstimuli_analyze_pystraight.summarize_pystraight_statistics(\n regex_fn,\n fn_results='results_dict.json',\n key_sr='sr',\n key_signal_list=['stimuli/signal'])\n\n# regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/PND*.hdf5'\n# regex_fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_mfcc/PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/*.hdf5'\n# print(regex_fn)\n# stimuli_compute_statistics.summarize_spectral_statistics(regex_fn,\n# fn_results='results_dict.json',\n# key_sr='sr',\n# key_f0=None,\n# key_signal_list=['stimuli/signal', 'stimuli/noise'])\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport glob\nimport numpy as np\nimport scipy.signal\nimport scipy.fftpack\nimport librosa\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\n\nregex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/*.hdf5'\n# regex_fn = '/om/scratch/*/msaddler/data_pitchnet/PND_mfcc/debug_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/PYSTRAIGHT_v01_foreground/*.hdf5'\nlist_fn = sorted(glob.glob(regex_fn))\nfn = list_fn[0]\n\n\nkey_f0 = 'f0'\nkey_y = 'stimuli/signal_INTERP_interp_signal'\n\nwith h5py.File(fn, 'r') as f:\n for k in util_misc.get_hdf5_dataset_key_list(f):\n print(k, f[k].shape)\n sr = f['sr'][0]\n IDX = np.random.randint(0, f[key_y].shape[0])\n# IDX = 5\n y = util_stimuli.set_dBSPL(f[key_y][IDX], 60.0)\n if True:#key_f0 in f:\n f0 = f[key_f0][IDX]\n print('------------>', f0, f['pystraight_success'][IDX])\n \n NTMP = f[key_y].shape[1]\n power = 0\n while NTMP > 2:\n NTMP /= 2\n power += 1\n n_fft = int(2 ** power)\n M = librosa.filters.mel(sr, n_fft, n_mels=40)\n Minv = np.linalg.pinv(M)\n \n fxx = np.fft.rfftfreq(n_fft, d=1/sr)\n pxx = f['stimuli/signal_FILTER_spectrumSTRAIGHT'][IDX]\n mfcc = f['stimuli/signal_FILTER_spectrumSTRAIGHT_mfcc'][IDX]\n# mfcc = scipy.fftpack.dct(np.log(np.matmul(M, pxx)), norm='ortho')\n mfcc[12:] = 0\n pxx_mfcc = np.matmul(Minv, np.exp(scipy.fftpack.idct(mfcc, norm='ortho')))\n pxx_mfcc[pxx_mfcc < 0] = 0\n pxx_mfcc = 10*np.log10(pxx_mfcc)\n pxx_mfcc -= pxx_mfcc.max()\n \n pxx_straight = pxx\n pxx_straight = 10*np.log10(pxx_straight)\n pxx_straight -= pxx_straight.max()\n\nfyy, pyy = util_stimuli.power_spectrum(y, sr)\npyy -= pyy.max()\n\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(12, 5))\nax.plot(fyy, pyy, color='k', lw=1.0)\nax.plot(fxx, pxx_straight, color='r', lw=2.0)\nax.plot(fxx, pxx_mfcc, color='g', lw=2.0)\n\nax = util_figures.format_axes(ax,\n xscale='linear',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB SPL)',\n xlimits=[40, sr/2],\n# ylimits=[-100, 10],\n spines_to_hide=['right', 'top'])\nplt.show()\n\nipd.display(ipd.Audio(y, rate=sr))\n",
"_____no_output_____"
],
[
"import sys\nimport os\nimport h5py\nimport json\nimport glob\nimport numpy as np\nimport librosa\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\n\nsys.path.append('/packages/msutil')\nimport util_stimuli\nimport util_misc\nimport util_figures\n\ndata_dir = '/om/scratch/Fri/msaddler/data_pitchnet/'\nlist_dataset_tag = [\n# ('PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural power spectrum'),\n ('PND_mfcc/debug_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic power spectrum (12-MFCC-matched to natural)'),\n\n# ('PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural filter spectrum'),\n# ('PND_mfcc/debug_PNDv08PYSmatched12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic filter spectrum (12-MFCC-matched to natural)'),\n# ('PND_mfcc/debug_PNDv08PYSnegated12_TLASmatched12_snr_neg10pos10_phase3/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic filter spectrum (12-MFCC-matched to natural)'),\n\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural foreground (lowpass-filtered)'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural foreground (highpass-filtered)'),\n\n# ('PND_v08/noise_TLAS_snr_neg10pos10/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv00/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural (lowpass_v00)'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural (lowpass_v01)'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00/SPECTRAL_STATISTICS_v00/results_dict.json', 'Natural (highpass_v00)'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv00/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural (lowpass_v00)'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalLPv01/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural (lowpass_v01)'),\n# ('PND_v08/noise_TLAS_snr_neg10pos10_filter_signalHPv00/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural (highpass_v00)'),\n\n# ('PND_mfcc/PNDv08matched12_TLASmatched12_snr_neg10pos10_phase0/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic foreground (12-MFCC-matched to natural)'),\n# ('PND_mfcc/debugPNDv08negated12_TLASmatched12_snr_neg10pos10_phase0/SPECTRAL_STATISTICS_v00/results_dict.json', 'Synthetic foreground (12-MFCC-negated to natural)'),\n# ('PND_v08spch/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural speech'),\n# ('PND_v08inst/noise_TLAS_snr_neg10pos10/PYSTRAIGHT_v01_foreground/results_dict.json', 'Natural instruments'),\n]\n\nfig, ax = plt.subplots(nrows=1, ncols=1, figsize=(8, 4))\n\nclist = 'krbgmyc'\nfor cidx, (dataset_tag, label_tag) in enumerate(list_dataset_tag):\n fn_results_dict = os.path.join(data_dir, dataset_tag)\n print(fn_results_dict)\n with open(fn_results_dict, 'r') as f:\n results_dict = json.load(f)\n \n if 'PYSTRAIGHT_v01_foreground' in fn_results_dict:\n key_fxx = 'mean_filter_spectrum_freqs'\n key_pxx = 'mean_filter_spectrum'\n key_n_fft = 'mean_filter_spectrum_n_fft'\n else:\n key_fxx = 'mean_power_spectrum_freqs'\n key_pxx = 'mean_power_spectrum'\n key_n_fft = 'mean_power_spectrum_n_fft'\n \n for key in 
sorted(results_dict.keys()):\n MEAN_FXX = np.array(results_dict[key][key_fxx])\n MEAN_PXX = np.array(results_dict[key][key_pxx])\n if 'PYSTRAIGHT_v01_foreground' in fn_results_dict:\n MEAN_PXX -= 10*np.log10(20e-6)\n \n sr = results_dict[key]['sr']\n mfcc_mean = np.array(results_dict[key]['mfcc_mean'])\n mfcc_mean[12:0]\n mfcc_cov = np.array(results_dict[key]['mfcc_cov'])\n n_fft = np.array(results_dict[key][key_n_fft])\n M = librosa.filters.mel(sr, n_fft, n_mels=mfcc_mean.shape[0])\n Minv = np.linalg.pinv(M)\n \n kwargs_plot = {\n 'ls': '-',\n 'color': clist[cidx],\n 'label': '{} : {}'.format(label_tag, key),\n }\n if 'noise' in key:\n kwargs_plot['ls'] = '--'\n kwargs_plot['color'] = [0.5] * 3\n kwargs_plot['label'] = None\n \n ax.plot(MEAN_FXX,\n MEAN_PXX,#-MEAN_PXX.max(),\n **kwargs_plot)\n \n# PXX_MFCC = 10*np.log10(util_stimuli.get_power_spectrum_from_mfcc(mfcc_mean, Minv))\n# ax.plot(MEAN_FXX,\n# PXX_MFCC,#-PXX_MFCC.max(),\n# **kwargs_plot)\n \n# sample_PXX_MFCC = np.zeros_like(PXX_MFCC)\n# nsamples=50\n# for _ in range(nsamples):\n# mfcc = np.random.multivariate_normal(mfcc_mean, mfcc_cov)\n# mfcc[6:] = 0\n# sample_PXX_MFCC += 10*np.log10(util_stimuli.get_power_spectrum_from_mfcc(mfcc, Minv))\n# sample_PXX_MFCC /= nsamples\n# ax.plot(MEAN_FXX,\n# sample_PXX_MFCC-sample_PXX_MFCC.max(),\n# **kwargs_plot)\n \nax.legend(loc='upper right', ncol=1)\nax = util_figures.format_axes(ax,\n xscale='log',\n str_xlabel='Frequency (Hz)',\n str_ylabel='Power (dB)',\n xlimits=[40, None],\n ylimits=[-20, None],\n spines_to_hide=['right', 'top'])\nplt.show()\n",
"_____no_output_____"
],
[
"import sys\nimport h5py\nimport numpy as np\nimport IPython.display as ipd\nsys.path.append('/packages/msutil')\nimport util_misc\nimport util_stimuli\n\n\n# fn = '/om/user/msaddler/data_pitchnet/neurophysiology/bernox2005_SlidingFixedFilter_lharm01to30_phase0_f0min080_f0max320/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/bez2018meanrates_009216-012288.hdf5'\n# fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08/noise_TLAS_snr_neg10pos10/sr20000_cf100_species002_spont070_BW10eN1_IHC3000Hz_IHC7order/bez2018meanrates_098_007000-014000.hdf5'\n# fn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08inst/noise_TLAS_snr_neg10pos10/PND_sr32000_v08inst_1422630-1437000.hdf5'\nfn = '/om/scratch/Fri/msaddler/data_pitchnet/PND_v08spch/noise_TLAS_snr_neg10pos10/PND_sr32000_v08spch_1422630-1437000.hdf5'\n# fn_new = 'PND_v08inst_examples_for_metamers.hdf5'\nfn_new = 'PND_v08spch_examples_for_metamers.hdf5'\n\nnp.random.seed(998)\n\ndata_dict = {}\nwith h5py.File(fn, 'r') as f:\n for k in util_misc.get_hdf5_dataset_key_list(f):\n# print(k, f[k])\n if f[k].shape[0] == 1:\n data_dict[k] = f[k][:]\n\n sr = f['sr'][0]\n \n key_signal = 'stimuli/signal'\n N = 15\n for itrN in range(N):\n IDX = np.random.randint(low=0, high=f['nopad_f0_mean'].shape[0])\n for k in util_misc.get_hdf5_dataset_key_list(f):\n if f[k].shape[0] > 1:\n if k not in data_dict:\n data_dict[k] = []\n data_dict[k].append(f[k][IDX])\n \n idx_start = f['nopad_start_index'][IDX] - f['segment_start_index'][IDX]\n idx_end = f['nopad_end_index'][IDX] - f['segment_start_index'][IDX]\n y = f['stimuli/signal'][IDX, idx_start:idx_end]\n y_preprocessed = y[0:int(0.05*sr)]\n y_preprocessed = util_stimuli.set_dBSPL(y_preprocessed, 60.0)\n \n if itrN == 0:\n data_dict['y'] = []\n data_dict['y_preprocessed'] = []\n data_dict['f0'] = []\n data_dict['y'].append(y)\n data_dict['y_preprocessed'].append(y_preprocessed)\n data_dict['f0'].append(f['nopad_f0_mean'][IDX])\n \n# ipd.display(ipd.Audio(y_preprocessed, rate=sr))\n\n# f_new = h5py.File(fn_new, 'w')\n# for k in sorted(data_dict.keys()):\n# data_dict[k] = np.array(data_dict[k])\n# # print(k, data_dict[k].shape, data_dict[k].dtype)\n# f_new.create_dataset(k, data=data_dict[k])\n# f_new.close()\n\nprint('F0:', data_dict['f0'])\n",
"_____no_output_____"
],
[
"import sys\nimport h5py\nimport numpy as np\nimport IPython.display as ipd\nsys.path.append('/packages/msutil')\nimport util_misc\nimport util_stimuli\n\nfn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/interim/swcDataframe_interim_processed2019-07-15-1830_processedFile__noNaN_sr32000.pdh5'\nfn = '/om4/group/mcdermott/user/msaddler/pitchnet_dataset/pitchnetDataset/assets/data/interim/sr32000_pystraight/swc_183928-184492.hdf5'\nwith h5py.File(fn) as f:\n for k in util_misc.get_hdf5_dataset_key_list(f):\n print(k, f[k])\n \n for _ in range(10):\n IDX = np.random.randint(f['interp_signal'].shape[0])\n y = f['interp_signal'][IDX]\n sr = f['sr'][0]\n\n ipd.display(ipd.Audio(y, rate=sr))\n",
"_____no_output_____"
],
[
"import sys\nimport h5py\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport IPython.display as ipd\nsys.path.append('/packages/msutil')\nimport util_misc\nimport util_stimuli\n\n# fn = '/om/user/msaddler/data_pitchnet/bernox2005/lowharm_v01/stim.hdf5'\nfn = '/om/user/msaddler/data_pitchnet/bernox2005/neurophysiology_v01_EqualAmpTEN_lharm01to15_phase0_f0min080_f0max640/stim.hdf5'\nfn = '/om/user/msaddler/data_pitchnet/bernox2005/neurophysiology_v01_EqualAmpTEN_lharm01to30_phase0_f0min080_f0max320/stim.hdf5'\nwith h5py.File(fn, 'r') as f:\n for k in util_misc.get_hdf5_dataset_key_list(f):\n print(k, f[k])\n \n print(np.unique(f['max_audible_harm'][:]))\n base_f0 = f['base_f0'][:]\n print(np.unique(base_f0).shape, base_f0.min(), base_f0.max())\n \n IDX = -10000\n sr = f['config_tone/fs'][0]\n y = f['tone_in_noise'][IDX]\n \n print(util_stimuli.get_dBSPL(y))\n fxx, pxx = util_stimuli.power_spectrum(y, sr)\n \n fig, ax = plt.subplots(figsize=(12, 2))\n ax.plot(fxx, pxx)\n ax.set_xlim([0, sr/2])\n ax.set_ylim([-30, None])\n plt.show()\n \n ipd.display(ipd.Audio(y, rate=sr))\n",
"_____no_output_____"
],
[
"import importlib\nimport stimuli_generate_BernsteinOxenhamPureTone\nimportlib.reload(stimuli_generate_BernsteinOxenhamPureTone)\n\n# hdf5_filename = '/om/user/msaddler/data_pitchnet/bernox2005/puretone_v01/stim.hdf5'\n# stimuli_generate_BernsteinOxenhamPureTone.main(\n# hdf5_filename,\n# fs=32e3,\n# dur=0.150,\n# f0_min=80.0,\n# f0_max=10240.0,\n# f0_n=50,\n# dbspl_min=20.0,\n# dbspl_max=60.0,\n# dbspl_step=0.25,\n# noise_dBHzSPL=10.0,\n# noise_attenuation_start=600.0,\n# noise_attenuation_slope=2,\n# disp_step=100)\n\n# hdf5_filename = '/om/user/msaddler/data_pitchnet/bernox2005/puretone_v02/stim.hdf5'\n# stimuli_generate_BernsteinOxenhamPureTone.main(\n# hdf5_filename,\n# fs=32e3,\n# dur=0.150,\n# f0_min=80.0,\n# f0_max=10240.0,\n# f0_n=50,\n# dbspl_min=20.0,\n# dbspl_max=60.0,\n# dbspl_step=0.25,\n# noise_dBHzSPL=12.0,\n# noise_attenuation_start=600.0,\n# noise_attenuation_slope=2,\n# disp_step=100)\n\n# hdf5_filename = '/om/user/msaddler/data_pitchnet/bernox2005/puretone_v03/stim.hdf5'\n# stimuli_generate_BernsteinOxenhamPureTone.main(\n# hdf5_filename,\n# fs=32e3,\n# dur=0.150,\n# f0_min=80.0,\n# f0_max=10240.0,\n# f0_n=50,\n# dbspl_min=20.0,\n# dbspl_max=60.0,\n# dbspl_step=0.25,\n# noise_dBHzSPL=8.0,\n# noise_attenuation_start=600.0,\n# noise_attenuation_slope=2,\n# disp_step=100)\n",
"ImportError in `dataset_util.py` No module named 'pyfftw'\n[INITIALIZING]: /om/user/msaddler/data_pitchnet/bernox2005/puretone_v01/stim.hdf5\n... signal 000000 of 008050, f0=80.00, dbspl=20.00\n... signal 000100 of 008050, f0=80.00, dbspl=45.00\n... signal 000200 of 008050, f0=88.33, dbspl=29.75\n... signal 000300 of 008050, f0=88.33, dbspl=54.75\n... signal 000400 of 008050, f0=97.52, dbspl=39.50\n... signal 000500 of 008050, f0=107.67, dbspl=24.25\n... signal 000600 of 008050, f0=107.67, dbspl=49.25\n... signal 000700 of 008050, f0=118.88, dbspl=34.00\n... signal 000800 of 008050, f0=118.88, dbspl=59.00\n... signal 000900 of 008050, f0=131.25, dbspl=43.75\n... signal 001000 of 008050, f0=144.92, dbspl=28.50\n... signal 001100 of 008050, f0=144.92, dbspl=53.50\n... signal 001200 of 008050, f0=160.00, dbspl=38.25\n... signal 001300 of 008050, f0=176.65, dbspl=23.00\n... signal 001400 of 008050, f0=176.65, dbspl=48.00\n... signal 001500 of 008050, f0=195.04, dbspl=32.75\n... signal 001600 of 008050, f0=195.04, dbspl=57.75\n... signal 001700 of 008050, f0=215.34, dbspl=42.50\n... signal 001800 of 008050, f0=237.76, dbspl=27.25\n... signal 001900 of 008050, f0=237.76, dbspl=52.25\n... signal 002000 of 008050, f0=262.51, dbspl=37.00\n... signal 002100 of 008050, f0=289.83, dbspl=21.75\n... signal 002200 of 008050, f0=289.83, dbspl=46.75\n... signal 002300 of 008050, f0=320.00, dbspl=31.50\n... signal 002400 of 008050, f0=320.00, dbspl=56.50\n... signal 002500 of 008050, f0=353.31, dbspl=41.25\n... signal 002600 of 008050, f0=390.08, dbspl=26.00\n... signal 002700 of 008050, f0=390.08, dbspl=51.00\n... signal 002800 of 008050, f0=430.69, dbspl=35.75\n... signal 002900 of 008050, f0=475.52, dbspl=20.50\n... signal 003000 of 008050, f0=475.52, dbspl=45.50\n... signal 003100 of 008050, f0=525.01, dbspl=30.25\n... signal 003200 of 008050, f0=525.01, dbspl=55.25\n... signal 003300 of 008050, f0=579.66, dbspl=40.00\n... signal 003400 of 008050, f0=640.00, dbspl=24.75\n... signal 003500 of 008050, f0=640.00, dbspl=49.75\n... signal 003600 of 008050, f0=706.62, dbspl=34.50\n... signal 003700 of 008050, f0=706.62, dbspl=59.50\n... signal 003800 of 008050, f0=780.17, dbspl=44.25\n... signal 003900 of 008050, f0=861.38, dbspl=29.00\n... signal 004000 of 008050, f0=861.38, dbspl=54.00\n... signal 004100 of 008050, f0=951.04, dbspl=38.75\n... signal 004200 of 008050, f0=1050.03, dbspl=23.50\n... signal 004300 of 008050, f0=1050.03, dbspl=48.50\n... signal 004400 of 008050, f0=1159.33, dbspl=33.25\n... signal 004500 of 008050, f0=1159.33, dbspl=58.25\n... signal 004600 of 008050, f0=1280.00, dbspl=43.00\n... signal 004700 of 008050, f0=1413.23, dbspl=27.75\n... signal 004800 of 008050, f0=1413.23, dbspl=52.75\n... signal 004900 of 008050, f0=1560.34, dbspl=37.50\n... signal 005000 of 008050, f0=1722.75, dbspl=22.25\n... signal 005100 of 008050, f0=1722.75, dbspl=47.25\n... signal 005200 of 008050, f0=1902.07, dbspl=32.00\n... signal 005300 of 008050, f0=1902.07, dbspl=57.00\n... signal 005400 of 008050, f0=2100.06, dbspl=41.75\n... signal 005500 of 008050, f0=2318.65, dbspl=26.50\n... signal 005600 of 008050, f0=2318.65, dbspl=51.50\n... signal 005700 of 008050, f0=2560.00, dbspl=36.25\n... signal 005800 of 008050, f0=2826.47, dbspl=21.00\n... signal 005900 of 008050, f0=2826.47, dbspl=46.00\n... signal 006000 of 008050, f0=3120.67, dbspl=30.75\n... signal 006100 of 008050, f0=3120.67, dbspl=55.75\n... signal 006200 of 008050, f0=3445.50, dbspl=40.50\n... 
signal 006300 of 008050, f0=3804.15, dbspl=25.25\n... signal 006400 of 008050, f0=3804.15, dbspl=50.25\n... signal 006500 of 008050, f0=4200.12, dbspl=35.00\n... signal 006600 of 008050, f0=4200.12, dbspl=60.00\n... signal 006700 of 008050, f0=4637.31, dbspl=44.75\n... signal 006800 of 008050, f0=5120.00, dbspl=29.50\n... signal 006900 of 008050, f0=5120.00, dbspl=54.50\n... signal 007000 of 008050, f0=5652.94, dbspl=39.25\n... signal 007100 of 008050, f0=6241.35, dbspl=24.00\n... signal 007200 of 008050, f0=6241.35, dbspl=49.00\n... signal 007300 of 008050, f0=6891.01, dbspl=33.75\n... signal 007400 of 008050, f0=6891.01, dbspl=58.75\n... signal 007500 of 008050, f0=7608.29, dbspl=43.50\n... signal 007600 of 008050, f0=8400.23, dbspl=28.25\n... signal 007700 of 008050, f0=8400.23, dbspl=53.25\n... signal 007800 of 008050, f0=9274.61, dbspl=38.00\n... signal 007900 of 008050, f0=10240.00, dbspl=22.75\n... signal 008000 of 008050, f0=10240.00, dbspl=47.75\n[END]: /om/user/msaddler/data_pitchnet/bernox2005/puretone_v01/stim.hdf5\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d6ccc7c79243336b1dcebeeddf0ad89a648656 | 51,245 | ipynb | Jupyter Notebook | notebooks/4_ica_dimensionality.ipynb | SBRG/modulome_ppu | f63a0dfab124ea9022123ce6227d132f39de0108 | [
"MIT"
] | null | null | null | notebooks/4_ica_dimensionality.ipynb | SBRG/modulome_ppu | f63a0dfab124ea9022123ce6227d132f39de0108 | [
"MIT"
] | null | null | null | notebooks/4_ica_dimensionality.ipynb | SBRG/modulome_ppu | f63a0dfab124ea9022123ce6227d132f39de0108 | [
"MIT"
] | null | null | null | 84.983416 | 33,364 | 0.751839 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Load-Data\" data-toc-modified-id=\"Load-Data-1\"><span class=\"toc-item-num\">1 </span>Load Data</a></span></li><li><span><a href=\"#Compare-dimensionalities\" data-toc-modified-id=\"Compare-dimensionalities-2\"><span class=\"toc-item-num\">2 </span>Compare dimensionalities</a></span></li><li><span><a href=\"#Find-"single-gene"-iModulons\" data-toc-modified-id=\"Find-"single-gene"-iModulons-3\"><span class=\"toc-item-num\">3 </span>Find \"single-gene\" iModulons</a></span></li><li><span><a href=\"#Plot-Components\" data-toc-modified-id=\"Plot-Components-4\"><span class=\"toc-item-num\">4 </span>Plot Components</a></span></li></ul></div>",
"_____no_output_____"
]
],
[
[
"from pymodulon.core import IcaData\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport numpy as np\nfrom tqdm.notebook import tqdm",
"_____no_output_____"
],
[
"# Directory containing ICA outputs\nDATA_DIR = '../data/interim/ica_runs'",
"_____no_output_____"
]
],
[
[
"# Load Data",
"_____no_output_____"
]
],
[
[
"def load_M(dim):\n return pd.read_csv(os.path.join(DATA_DIR,str(dim),'S.csv'),index_col=0)\n\ndef load_A(dim):\n return pd.read_csv(os.path.join(DATA_DIR,str(dim),'A.csv'),index_col=0)",
"_____no_output_____"
],
[
"dims = sorted([int(x) for x in os.listdir(DATA_DIR)])\nM_data = [load_M(dim) for dim in dims]\nA_data = [load_A(dim) for dim in dims]",
"_____no_output_____"
],
[
"n_components = [m.shape[1] for m in M_data]",
"_____no_output_____"
]
],
[
[
"# Compare dimensionalities",
"_____no_output_____"
]
],
[
[
"final_m = M_data[-1]\nthresh = 0.7\n",
"_____no_output_____"
],
[
"n_final_mods = []\nfor m in tqdm(M_data):\n corrs = pd.DataFrame(index=final_m.columns,columns=m.columns)\n for col1 in final_m.columns:\n for col2 in m.columns:\n corrs.loc[col1,col2] = abs(stats.pearsonr(final_m[col1],m[col2])[0])\n n_final_mods.append(len(np.where(corrs > thresh)[0]))",
"_____no_output_____"
]
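,
[
"# Optional, hypothetical speed-up (an assumption, not part of the original analysis):\n# the nested Pearson-correlation loop above can be vectorized with np.corrcoef.\n# Stacking the transposed matrices makes the off-diagonal block of the correlation\n# matrix hold every final-component / component pair at once.\nimport numpy as np\n\ndef count_matched_components(final_m, m, thresh=0.7):\n    k = final_m.shape[1]\n    # cross-correlation block between the two sets of columns\n    corr = np.corrcoef(final_m.values.T, m.values.T)[:k, k:]\n    return int((np.abs(corr) > thresh).sum())\n\n# Sanity check against the loop-based result for the final dimensionality\nprint(count_matched_components(final_m, M_data[-1], thresh))",
"_____no_output_____"
]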
],
[
[
"# Find \"single-gene\" iModulons\nAt a high enough dimensionality, some iModulons track the expression trajectory of a single iModulon",
"_____no_output_____"
]
],
[
[
"n_single_genes = []\nfor m in tqdm(M_data):\n counter = 0\n for col in m.columns:\n sorted_genes = abs(m[col]).sort_values(ascending=False)\n if sorted_genes.iloc[0] > 2 * sorted_genes.iloc[1]:\n counter += 1\n n_single_genes.append(counter)",
"_____no_output_____"
]
],
[
[
"# Plot Components",
"_____no_output_____"
]
],
[
[
"non_single_components = np.array(n_components) - np.array(n_single_genes)",
"_____no_output_____"
],
[
"DF_stats = pd.DataFrame([n_components,n_final_mods,non_single_components,n_single_genes],\n index=['Robust Components','Final Components','Multi-gene Components',\n 'Single Gene Components'],\n columns=dims).T\nDF_stats.sort_index(inplace=True)",
"_____no_output_____"
],
[
"dimensionality = DF_stats[DF_stats['Final Components'] >= DF_stats['Multi-gene Components']].iloc[0].name\nprint('Optimal Dimensionality:',dimensionality)",
"Optimal Dimensionality: 220\n"
],
[
"plt.plot(dims,n_components,label='Robust Components')\nplt.plot(dims,n_final_mods,label='Final Components')\nplt.plot(dims,non_single_components,label='Non-single-gene Components')\nplt.plot(dims,n_single_genes,label='Single Gene Components')\n\nplt.vlines(dimensionality,0,max(n_components),linestyle='dashed')\n\nplt.xlabel('Dimensionality')\nplt.ylabel('# Components')\nplt.legend(bbox_to_anchor=(1,1))",
"_____no_output_____"
],
[
"DF_stats",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0d6e1c1a09f147f5314df4a7f46dade1fd8c89d | 14,173 | ipynb | Jupyter Notebook | Chapter01/Exercise04/Exercise04.ipynb | MaheshPackt/Data-Science-Projects-2nd | 400f575c90945227fc74c7bb07cc7c7b1e413969 | [
"MIT"
] | null | null | null | Chapter01/Exercise04/Exercise04.ipynb | MaheshPackt/Data-Science-Projects-2nd | 400f575c90945227fc74c7bb07cc7c7b1e413969 | [
"MIT"
] | null | null | null | Chapter01/Exercise04/Exercise04.ipynb | MaheshPackt/Data-Science-Projects-2nd | 400f575c90945227fc74c7bb07cc7c7b1e413969 | [
"MIT"
] | null | null | null | 25.263815 | 198 | 0.363508 | [
[
[
"# Exercise 1.04: Continuing Verification of Data Integrity",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_excel(\n '../../Data/default_of_credit_card_clients__courseware_version_1_21_19.xls')",
"_____no_output_____"
],
[
"id_counts = df['ID'].value_counts()\nid_counts.head()",
"_____no_output_____"
],
[
"dupe_mask = id_counts == 2",
"_____no_output_____"
],
[
"dupe_mask[0:5]",
"_____no_output_____"
],
[
"id_counts.index[0:5]",
"_____no_output_____"
],
[
"dupe_ids = id_counts.index[dupe_mask]",
"_____no_output_____"
],
[
"dupe_ids = list(dupe_ids)\nlen(dupe_ids)",
"_____no_output_____"
],
[
"dupe_ids[0:5]",
"_____no_output_____"
],
[
"df.loc[df['ID'].isin(dupe_ids[0:3]),:]",
"_____no_output_____"
]
],
[
[
"We can see some duplicates here, and it looks like every duplicate ID has one row with data, and another row with all zeros. Is this the case for every duplicate ID? Let's check.",
"_____no_output_____"
]
],
[
[
"df.shape",
"_____no_output_____"
],
[
"df_zero_mask = df == 0",
"_____no_output_____"
],
[
"feature_zero_mask = df_zero_mask.iloc[:,1:].all(axis=1)",
"_____no_output_____"
],
[
"sum(feature_zero_mask)",
"_____no_output_____"
]
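,
[
"# Hypothetical extra check (not part of the original exercise): confirm that the\n# all-zero rows are exactly the rows carrying the duplicated IDs\ndf.loc[feature_zero_mask, 'ID'].isin(dupe_ids).all()",
"_____no_output_____"
]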
],
[
[
"It looks like there are at least as many \"zero rows\" as there are duplicate IDs. Let's remove all the rows with all zero features and response, and see if that gets rid of the duplicate IDs.",
"_____no_output_____"
]
],
[
[
"df_clean_1 = df.loc[~feature_zero_mask,:].copy()",
"_____no_output_____"
],
[
"df_clean_1.shape",
"_____no_output_____"
],
[
"df_clean_1['ID'].nunique()",
"_____no_output_____"
]
],
[
[
"Looks like this worked. Save progress for next exercise.",
"_____no_output_____"
]
],
[
[
"df_clean_1.to_csv('../../Data/df_clean_1.csv', index=False)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d6f2047d71f434289ae9c4828a3f69651f2e7e | 12,324 | ipynb | Jupyter Notebook | 01b - Image Classification.ipynb | jonnalakarthik/ai-fundamentals | 0ce47ff85bb7c6cadba227e4542d23cd6678f45c | [
"MIT"
] | 1 | 2020-07-23T14:25:02.000Z | 2020-07-23T14:25:02.000Z | 01b - Image Classification.ipynb | jonnalakarthik/ai-fundamentals | 0ce47ff85bb7c6cadba227e4542d23cd6678f45c | [
"MIT"
] | null | null | null | 01b - Image Classification.ipynb | jonnalakarthik/ai-fundamentals | 0ce47ff85bb7c6cadba227e4542d23cd6678f45c | [
"MIT"
] | 1 | 2020-07-23T14:25:03.000Z | 2020-07-23T14:25:03.000Z | 70.422857 | 512 | 0.643298 | [
[
[
"# Image Classification\n\nThe *Computer Vision* cognitive service provides useful pre-built models for working with images, but you'll often need to train your own model for computer vision. For example, suppose the Northwind Traders retail company wants to create an automated checkout system that identifies the grocery items customers want to buy based on an image taken by a camera at the checkout. To do this, you'll need to train a classification model that can classify the images to identify the item being purchased.\n\n<p style='text-align:center'><img src='./images/image-classification.jpg' alt='A robot holding a clipboard, classifying pictures of an apple, a banana, and an orange'/></p>\n\nIn Azure, you can use the ***Custom Vision*** cognitive service to train an image classification model based on existing images. There are two elements to creating an image classification solution. First, you must train a model to recognize different classes using existing images. Then, when the model is trained you must publish it as a service that can be consumed by applications.\n\n## Create a Custom Vision resource\n\nTo use the Custom Vision service, you need an Azure resource that you can use to *train* a model, and a resource with which you can *publish* it for applications to use. The resource for either (or both) tasks can be a general **Cognitive Services** resource, or a specific **Custom Vision** resource. You can use the same Cognitive Services resource for each of these tasks, or you can use different resources (in the same region) for each task to manage costs separately.\n\nUse the following instructions to create a new **Custom Vision** resource.\n\n1. In a new browser tab, open the Azure portal at [https://portal.azure.com](https://portal.azure.com), and sign in using the Microsoft account associated with your Azure subscription.\n2. Select the **+Create a resource** button, search for *custom vision*, and create a **Custom Vision** resource with the following settings:\n - **Create options**: Both\n - **Subscription**: *Your Azure subscription*\n - **Resource group**: *Create a new resource group with a unique name*\n - **Name**: *Enter a unique name*\n - **Training location**: *Choose any available region*\n - **Training pricing tier**: F0\n - **Prediction location**: *The same region as the training resource*\n - **Prediction pricing tier**: F0\n\n > **Note**: If you already have an F0 custom vision service in your subscription, select **S0** for this one.\n\n3. Wait for the resources to be created, and note that two Custom Vision resources are provisioned; one for training, an another for prediction. You can view these by navigating to the resource group where you created them.\n\n## Create a Custom Vision project\n\nTo train an object detection model, you need to create a Custom Vision project based on your training resource. To do this, you'll use the Custom Vision portal.\n\n1. Download and extract the training images from https://aka.ms/fruit-images.\n2. In another browser tab, open the Custom Vision portal at [https://customvision.ai](https://customvision.ai). If prompted, sign in using the Microsoft account associated with your Azure subscription and agree to the terms of service.\n3. 
In the Custom Vision portal, create a new project with the following settings:\n - **Name**: Grocery Checkout\n - **Description**: Image classification for groceries\n - **Resource**: *The Custom Vision resource you created previously*\n - **Project Types**: Classification\n - **Classification Types**: Multiclass (single tag per image)\n - **Domains**: Food\n4. Click **\\[+\\] Add images**, and select all of the files in the **apple** folder you extracted previously. Then upload the image files, specifying the tag *apple*, like this:\n <p style='text-align:center'><img src='./images/upload_apples.jpg' alt='Upload apple with apple tag'/></p>\n5. Repeat the previous step to upload the images in the **banana** folder with the tag *banana*, and the images in the **orange** folder with the tag *orange*.\n6. Explore the images you have uploaded in the Custom Vision project - there should be 15 images of each class, like this:\n <p style='text-align:center'><img src='./images/fruit.jpg' alt='Tagged images of fruit - 15 apples, 15 bananas, and 15 oranges'/></p>\n7. In the Custom Vision project, above the images, click **Train** to train a classification model using the tagged images. Select the **Quick Training** option, and then wait for the training iteration to complete (this may take a minute or so).\n8. When the model iteration has been trained, review the *Precision*, *Recall*, and *AP* performance metrics - these measure the prediction accuracy of the classification model, and should all be high.\n\n## Test the model\n\nBefore publishing this iteration of the model for applications to use, you should test it.\n\n1. Above the performance metrics, click **Quick Test**.\n2. In the **Image URL** box, type `https://aka.ms/apple-image` and click ➔\n3. View the predictions returned by your model - the probability score for *apple* should be the highest, like this:\n <p style='text-align:center'><img src='./images/test-apple.jpg' alt='An image with a class prediction of apple'/></p>\n4. Close the **Quick Test** window.\n\n## Publish and consume the image classification model\n\nNow you're ready to publish your trained model and use it from a client application.\n\n9. Click **🗸 Publish** to publish the trained model with the following settings:\n - **Model name**: groceries\n - **Prediction Resource**: *The prediction resource you created previously*.\n10. After publishing, click the *settings* (⚙) icon at the top right of the **Performance** page to view the project settings. Then, under **General** (on the left), copy the **Project Id** and paste it into the code cell below (replacing **YOUR_PROJECT_ID**).\n <p style='text-align:center'><img src='./images/cv_project_settings.jpg' alt='Project ID in project settings'/></p>\n\n> _**Note**: If you used a **Cognitive Services** resource instead of creating a **Custom Vision** resource at the beginning of this exercise, you can copy its key and endpoint from the right side of the project settings, paste it into the code cell below, and run it to see the results. Otherwise, continue completing the steps below to get the key and endpoint for your Custom Vision prediction resource._\n\n11. At the top left of the **Project Settings** page, click the *Projects Gallery* (👁) icon to return to the Custom Vision portal home page, where your project is now listed.\n12. On the Custom Vision portal home page, at the top right, click the *settings* (⚙) icon to view the settings for your Custom Vision service. 
Then, under **Resources**, expand your *prediction* resource (<u>not</u> the training resource) and copy its **Key** and **Endpoint** values to the code cell below, replacing **YOUR_KEY** and **YOUR_ENDPOINT**.\n <p style='text-align:center'><img src='./images/cv_settings.jpg' alt='Prediction resource key and endpoint in custom vision settings'/></p>\n13. Run the code cell below to set the variables to your project ID, key, and endpoint values.",
"_____no_output_____"
]
],
[
[
"project_id = 'YOUR_PROJECT_ID'\ncv_key = 'YOUR_KEY'\ncv_endpoint = 'YOUR_ENDPOINT'\n\nmodel_name = 'groceries' # this must match the model name you set when publishing your model iteration (it's case-sensitive)!\nprint('Ready to predict using model {} in project {}'.format(model_name, project_id))",
"_____no_output_____"
]
],
[
[
"Client applications can use the details above to connect to and your custom vision classification model.\n\nRun the following code cell to classifiy a selection of test images using your published model.\n\n> **Note**: Don't worry too much about the details of the code. It uses the Computer Vision SDK for Python to get a class prediction for each image in the /data/image-classification/test-fruit folder",
"_____no_output_____"
]
],
[
[
"from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient\nimport matplotlib.pyplot as plt\nfrom PIL import Image\nimport os\n%matplotlib inline\n\n# Get the test images from the data/vision/test folder\ntest_folder = os.path.join('data', 'image-classification', 'test-fruit')\ntest_images = os.listdir(test_folder)\n\n# Create an instance of the prediction service\ncustom_vision_client = CustomVisionPredictionClient(cv_key, endpoint=cv_endpoint)\n\n# Create a figure to display the results\nfig = plt.figure(figsize=(16, 8))\n\n# Get the images and show the predicted classes for each one\nprint('Classifying images in {} ...'.format(test_folder))\nfor i in range(len(test_images)):\n # Open the image, and use the custom vision model to classify it\n image_contents = open(os.path.join(test_folder, test_images[i]), \"rb\")\n classification = custom_vision_client.classify_image(project_id, model_name, image_contents.read())\n # The results include a prediction for each tag, in descending order of probability - get the first one\n prediction = classification.predictions[0].tag_name\n # Display the image with its predicted class\n img = Image.open(os.path.join(test_folder, test_images[i]))\n a=fig.add_subplot(len(test_images)/3, 3,i+1)\n a.axis('off')\n imgplot = plt.imshow(img)\n a.set_title(prediction)\nplt.show()",
"_____no_output_____"
]
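,
[
"# Hypothetical follow-up (an assumption, not part of the original lab): inspect the\n# full probability distribution for a single test image instead of only the top tag.\n# Reuses the same classify_image call as the cell above.\nimage_contents = open(os.path.join(test_folder, test_images[0]), 'rb')\nclassification = custom_vision_client.classify_image(project_id, model_name, image_contents.read())\nfor prediction in classification.predictions:\n    print('{}: {:.2%}'.format(prediction.tag_name, prediction.probability))",
"_____no_output_____"
]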
],
[
[
"Hopefully, your image classification model has correctly identified the groceries in the images.\n\n## Learn more\n\nThe Custom Vision service offers more capabilities than we've explored in this exercise. For example, you can also use the Custom Vision service to create *object detection* models; which not only classify objects in images, but also identify *bounding boxes* that show the location of the object in the image.\n\nTo learn more about the Custom Vision cognitive service, view the [Custom Vision documentation](https://docs.microsoft.com/azure/cognitive-services/custom-vision-service/home)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
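"code",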
"code"
],
[
"markdown"
]
] |
d0d6f9ebde36ef48c24e295492c2964383455e6d | 110,941 | ipynb | Jupyter Notebook | Regression/Project_To_Plane.ipynb | evstigneevnm/NS3DPeriodic | 094c168f153e36f00ac9103675416ddb9ee6b586 | [
"BSD-3-Clause"
] | null | null | null | Regression/Project_To_Plane.ipynb | evstigneevnm/NS3DPeriodic | 094c168f153e36f00ac9103675416ddb9ee6b586 | [
"BSD-3-Clause"
] | null | null | null | Regression/Project_To_Plane.ipynb | evstigneevnm/NS3DPeriodic | 094c168f153e36f00ac9103675416ddb9ee6b586 | [
"BSD-3-Clause"
] | null | null | null | 288.908854 | 66,562 | 0.921138 | [
[
[
"import numpy as np\nimport pandas as pd\nfrom sklearn import linear_model\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib inline\n#%matplotlib notebook",
"_____no_output_____"
],
[
"data=pd.read_csv('test_point.dat', header = None, sep = ' ')\ndata1=data.dropna(axis=1)",
"_____no_output_____"
],
[
"data1.shape[1]\n#data1.iloc[:,3]",
"_____no_output_____"
],
[
"ols=linear_model.LinearRegression()",
"_____no_output_____"
],
[
"ols.fit(data1.iloc[:,3:5], data1.iloc[:,5:data1.shape[1]])",
"_____no_output_____"
],
[
"ols.coef_",
"_____no_output_____"
],
[
"print \"maximum error =\", np.max(ols.predict(data1.iloc[:,3:5])-data1.iloc[:,5:data1.shape[1]].values)",
"maximum error = 0.0008663649047082611\n"
],
[
"def generate_ractangle(Nx, Ny, Lx, Ly, center_x, center_y, theta_grad):\n \n theta=theta_grad*np.pi/180.0\n M=[[np.cos(theta),np.sin(theta)], [-np.sin(theta), np.cos(theta)]]\n rectangle_x=np.zeros([Nx,Ny])\n rectangle_y=np.zeros([Nx,Ny])\n dx=Lx/float(Nx);\n dy=Ly/float(Ny);\n for j in xrange(Nx):\n for k in xrange(Ny):\n [x,y]=np.dot(M,[j*dx-Lx/2.0, k*dy-Ly/2.0]);\n rectangle_x[j,k]=x + center_x;\n rectangle_y[j,k]=y + center_y;\n \n \n return rectangle_x, rectangle_y, M",
"_____no_output_____"
],
[
"rectangle_x=np.zeros([20,5])\ncenter_x=0.5*(data1.iloc[:,3].max()+data1.iloc[:,3].min())\ncenter_y=0.5*(data1.iloc[:,4].max()+data1.iloc[:,4].min())\nrectangle_x, rectangle_y, M = generate_ractangle(20, 5, 0.013, 0.00007, center_x*1.0002, center_y,59.0)\nplt.axis([np.min(rectangle_x)-0.0001,np.max(rectangle_x)+0.0001,np.min(rectangle_y)-0.0001,np.max(rectangle_y)+0.0001])\nplt.scatter(rectangle_x,rectangle_y)\nplt.scatter(data1.iloc[:,3],data1.iloc[:,4])",
"_____no_output_____"
],
[
"#ols.predict(data1.iloc[:,3:5])\nrectangle=np.zeros([100,2])\ncount=0\nfor j in xrange(20):\n for k in xrange(5):\n rectangle[count,0]=rectangle_x[j,k]\n rectangle[count,1]=rectangle_y[j,k]\n count=count+1\n\n",
"_____no_output_____"
],
[
"projections0=ols.predict(rectangle)\nprojections=np.zeros([data1.shape[0],data1.shape[1]])\nprojections[:,5:data1.shape[1]]=projections0[:,0:projections0.shape[1]]\nprojections[:,3:5]=rectangle[:,:]\nprint data1.shape, projections.shape\n\nplt.scatter(data1.iloc[:,222],data1.iloc[:,2800])\nplt.scatter(projections[:,222],projections[:,2800])\n\n#write_out(projections)\n\nnp.savetxt('test_point_6.dat', projections, fmt='%.16e')",
"(100, 24576) (100, 24576)\n"
],
[
"data_res=pd.read_csv('test_point_out_6_2.dat', header = None, sep = ' ')\ndata_res1=data_res.dropna(axis=1)",
"_____no_output_____"
],
[
"def print_out_pos(filename, data_x, data_y):\n S=data_x.size\n outF = open(filename, \"w\")\n outF.write(\"View \\\"%s\\\"{\\n\" % filename)\n for x in xrange(S):\n outF.write(\"SP(%.16le,%.16le,0.0){%i};\\n\" % (data_x[x], data_y[x],x))\n \n outF.write(\"};\")\n outF.close();\n \n \n ",
"_____no_output_____"
],
[
"px=21020#234\npy=10011#4111\nplt.axis([np.min(data_res1.iloc[:,px])-0.0001,np.max(data_res1.iloc[:,px])+0.0001,np.min(data_res1.iloc[:,py])-0.0001,np.max(data_res1.iloc[:,py])+0.0001])\nplt.scatter(data_res1.iloc[:,px],data_res1.iloc[:,py])\nplt.scatter(projections[:,px],projections[:,py])\nprint_out_pos(\"test6_2_2.pos\",projections[:,px],projections[:,py])\nprint_out_pos(\"test6_2_3.pos\",data_res1.iloc[:,px],data_res1.iloc[:,py])",
"_____no_output_____"
],
[
"px=312\npy=3424\npz=54\nfig = plt.figure()\nax = Axes3D(fig)\nax.scatter(data1.iloc[:,px],data1.iloc[:,py],data1.iloc[:,pz])\n#ax.scatter(ols.predict(data1.iloc[:,3:5])[:,1130],ols.predict(data1.iloc[:,3:5])[:,12183],ols.predict(data1.iloc[:,3:5])[:,66])\nax.scatter(data_res1.iloc[:,px],data_res1.iloc[:,py],data_res1.iloc[:,pz])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d70eee9ffa4a05c9db3a209e951ccd8fb00dfc | 16,096 | ipynb | Jupyter Notebook | jupyter_notebooks/s3_server/s3_storage.ipynb | threefoldfoundation/info_gridmanual | 70b92db9303c1575b0badcfbd0d20f3f680a83e9 | [
"Apache-2.0"
] | 3 | 2021-02-24T04:05:30.000Z | 2021-05-21T22:49:22.000Z | jupyter_notebooks/s3_server/s3_storage.ipynb | threefoldfoundation/info_gridmanual | 70b92db9303c1575b0badcfbd0d20f3f680a83e9 | [
"Apache-2.0"
] | 128 | 2020-07-21T08:33:29.000Z | 2021-01-25T10:45:47.000Z | jupyter_notebooks/s3_server/s3_storage.ipynb | threefoldfoundation/info_sdk | 3ce272c042886b3820e294ca545882e19ce36b97 | [
"Apache-2.0"
] | 5 | 2020-04-08T09:56:11.000Z | 2020-05-21T11:21:19.000Z | 45.988571 | 508 | 0.604063 | [
[
[
"## Deploy a simple S3 dispersed storage archive solution\n\n#### Requirements\n\nIn order to be able to deploy this example deployment you will have to have the following components activated\n- the 3Bot SDK, in the form of a local container with the SDK, or a grid based SDK container. Getting started instuctions are [here](https://github.com/Threefoldfoundation/info_projectX/tree/development/doc/jumpscale_SDK) \n- if you use a locally installed container with the 3Bot SDK you need to have the wireguard software installed. Instructions to how to get his installed on your platform could be found [here](https://www.wireguard.com/install/)\n- capacity reservation are not free so you will need to have some ThreeFold_Tokens (TFT) to play around with. Instructions to get tokens could be found [here](https://github.com/Threefoldfoundation/info_projectX/blob/development/doc/jumpscale_SDK_information/payment/FreeTFT_testtoken.md)\n\nAfter following these install instructions you should end up having a local, working TF Grid SDK installed. You could work / connect to the installed SDK as described [here](https://github.com/Threefoldfoundation/info_projectX/blob/development/doc/jumpscale_SDK/SDK_getting_started.md)\n\n### Overview\nThe design a simple S3 archive solution we need to follow a few simple steps:\n- create (or identify and use) an overlay network that spans all of the nodes needed in the solution\n- identify which nodes are involved in the archive for storage and which nodes are running the storage software\n- create reservations on the storage nodes for low level storage. Create and deploy zero-DB's\n- collect information of how to access and use the low level storage devices to be passed on to the S3 storage software\n- design the architecture, data and parity disk design\n- deploy the S3 software in a container\n\n#### Create overlay network of identity an previously deployed overlay network\n\nEach overlay network is private and contains private IP addresses. Each overlay network is deployed in such a way that is has no connection to the public (IPv4 or IPv6) network directly. In order to work with such a network a tunnel needs to be created between the overlay network on the grid and your local network. You could find instructions how to do that [here](https://github.com/Threefoldfoundation/info_projectX/blob/development/doc/jumpscale_SDK_examples/network/overlay_network.md)\n",
"_____no_output_____"
],
[
"#### Set up the capacity environment to find, reserve and configure\n\nMake sure that your SDK points to the mainnet explorer for deploying this capacity example. Also make sure you have an identity loaded. The example code uses the default identity. Multiple identities could be stored in the TF Grid SDK. To check your available identities you could request the number of identities available for you by typing `j.tools.threebot.me` in the kosmos shell.\n",
"_____no_output_____"
]
],
[
[
"from Jumpscale import j\nimport time\n\nj.clients.explorer.default_addr_set('explorer.grid.tf')\n\n# Which identities are available in you SDK\nj.tools.threebot.me\n\n# Make sure I have an identity (set default one for mainnet of testnet)\nme = j.tools.threebot.me.default\n\n# Load the zero-os sal and reate empty reservation method\nzos = j.sal.zosv2\nr = zos.reservation_create()",
"_____no_output_____"
]
],
[
[
"#### Setup your overlay network (skip this step if you have a network setup and available)\n\nAn overlay network creates a private peer2peer network over selected nodes. In this notebook it is assumend you have created one by following this [notebook](https://github.com/Threefoldfoundation/info_projectX/blob/development/code/jupyter/SDK_examples/network/overlay_network.ipynb)\n\n#### Design the S3 simple storage solution\n\nYou have created a network in the network creation [notebook](https://github.com/Threefoldfoundation/info_projectX/blob/development/code/jupyter/SDK_examples/network/overlay_network.ipynb) with the following details:\n```\ndemo_ip_range=\"172.20.0.0/16\"\ndemo_port=8030\ndemo_network_name=\"demo_network_name_01\"\n```\n\nWhen you executed the reservation it also provided you with a data on order number, node ID and private network range on the node. All the nodes in the network are connected peer2peer with a wireguard tunnel. On these nodes we could now create a storage solution. For this solution we will using some of these nodes as raw storage provider nodes and others as the storage application nodes. Using the ouput of the network reservation notebook to describe the high level design of the storage solution:\n\n| Nr. | Location | Node ID. | IPV4 network | Function. |\n|--------|---|---|---|---|\n| 1 | Salzburg | 9kcLeTuseybGHGWw2YXvdu4kk2jZzyZCaCHV9t6Axqqx | 172.20.15.0/24 | Storage sofware container, 10GB raw |\n| 2 | Salzburg | 3h4TKp11bNWjb2UemgrVwayuPnYcs2M1bccXvi3jPR2Y | 172.20.16.0/24 | 10GB raw |\n| 3 | Salzburg | FUq4Sz7CdafZYV2qJmTe3Rs4U4fxtJFcnV6mPNgGbmRg | 172.20.17.0/24 | 10GB raw |\n| 4 | Vienna | 9LmpYPBhnrL9VrboNmycJoGfGDjuaMNGsGQKeqrUMSii | 172.20.28.0/24 | 10GB raw |\n| 5 | Vienna | 3FPB4fPoxw8WMHsqdLHamfXAdUrcRwdZY7hxsFQt3odL | 172.20.29.0/24 | 10GB raw |\n| 6 | Vienna | CrgLXq3w2Pavr7XrVA7HweH6LJvLWnKPwUbttcNNgJX7 | 172.20.30.0/24 | 10GB raw |\n\n\n#### Reserve and deploy the low level ZeroDB storage nodes\n\nFirst let's deploy low level storage capacity manager (Zero BD, more info [here](https://github.com/Threefoldtech/0-DB)). In the next piece of code we do the following:\n- create some empty reservation and result structures\n- select and set the node to container the S3 software\n- select and load the nodes in a list to push them in the zero-DB reservation structure",
"_____no_output_____"
]
],
[
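[
"# OPTIONAL sketch (assumption - not the authoritative code): reserve the overlay\n# network used below. Skip this cell if the network from the overlay-network\n# notebook already exists. The zosv2 sal calls and their exact signatures are\n# assumed from that notebook and may need adjusting - treat this as a sketch.\nCREATE_NETWORK = False # flip to True only if you still need to reserve the network\nif CREATE_NETWORK:\n    day = 24 * 60 * 60\n    reservation_network = zos.reservation_create()\n    network = zos.network.create(reservation_network, ip_range='172.20.0.0/16', network_name='demo_network_name_01')\n    candidate_nodes = zos.nodes_finder.nodes_search(farm_id=12775)[5:8]\n    for i, node in enumerate(candidate_nodes):\n        zos.network.add_node(network, node.node_id, '172.20.{}.0/24'.format(15 + i))\n    # add_access returns a wireguard config used to join the overlay from your machine\n    wg_config = zos.network.add_access(network, candidate_nodes[0].node_id, '172.20.100.0/24', ipv4=True)\n    rid_network = zos.reservation_register(reservation_network, j.data.time.epoch + day, identity=me)\n    print(wg_config)",
"_____no_output_____"
],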
[
"# load the zero-os sal\nzos = j.sal.zosv2\n\nday=24*60*60\nhour=60*60\n\n# Node: 5 ID: 9kcLeTuseybGHGWw2YXvdu4kk2jZzyZCaCHV9t6Axqqx IPv4 address: 172.20.15.0/24\nminio_node_id = '9kcLeTuseybGHGWw2YXvdu4kk2jZzyZCaCHV9t6Axqqx'\nminio_node_ip = '172.20.15.16'\n# ----------------------------------------------------------------------------------\nreservation_network = zos.reservation_create()\nreservation_zdbs = zos.reservation_create()\nreservation_storage = zos.reservation_create()\n\nrid_network=0\nrid_zdbs=0\nrid_storage=0\n\npassword = \"supersecret\"\n\n# ----------------------------------------------------------------------------------\n# Select and create a reservation for nodes to deploy a ZDB\n# first find the node where to reserve 0-DB namespaces. Select all the salzburg nodes\n# ----------------------------------------------------------------------------------\n\nnodes_salzburg = zos.nodes_finder.nodes_search(farm_id=12775) # (IPv6 nodes)\nnodes_vienna_1 = zos.nodes_finder.nodes_search(farm_id=82872) # (IPv6 nodes)\n\n# ----------------------------------------------------------------------------------\n# Definition of functional nodes\n# ----------------------------------------------------------------------------------\nnodes_all = nodes_salzburg[5:8] + nodes_vienna_1[5:8]\n\n# ----------------------------------------------------------------------------------\n# Create ZDB reservation for the selected nodes\n# ----------------------------------------------------------------------------------\nfor node in nodes_all:\n zos.zdb.create(\n reservation=reservation_zdbs,\n node_id=node.node_id,\n size=10,\n mode='seq',\n password='supersecret',\n disk_type=\"SSD\",\n public=False)\n ",
"_____no_output_____"
]
],
[
[
"#### Prepare and deploy the S3 software container\n\nThe nodes that will run the storage solution needs some persistent storage. This will create a reservation for a volume on the same node as the software runs and attached this as a volume to the container that will run the storage software. For the reservation duration please set a period of time that allows for expermenting, in this case it is set for one day. ",
"_____no_output_____"
]
],
[
[
"# Storage solution reservation time\nnr_of_hours=24\n\n# ----------------------------------------------------------------------------------\n# Attach persistant storage to container - for storing metadata\n# ---------------------------------------------------------------------------------- \nvolume = zos.volume.create(reservation_storage,minio_node_id,size=10,type='SSD')\nvolume_rid = zos.reservation_register(reservation_storage, j.data.time.epoch+(nr_of_hours*hour), identity=me)\nresults = zos.reservation_result(volume_rid)\n\n# ----------------------------------------------------------------------------------\n# Actuate the reservation for the ZDB's The IP addresses are going to be selfassigned.\n# ----------------------------------------------------------------------------------\nexpiration = j.data.time.epoch + (nr_of_hours*hour)\n\n# register the reservation\nrid_zdb = zos.reservation_register(reservation_zdbs, expiration, identity=me)\ntime.sleep(5)\n\nresults = zos.reservation_result(rid_zdb)",
"_____no_output_____"
]
],
[
[
"With the low level zero-DB reservations done and stored the `results` variable (these storage managers will get an IPv4 address assigned from the local `/24` node network. We need to store those addresses in `namespace_config` to pass it to the container running the storage software.",
"_____no_output_____"
]
],
[
[
"# ----------------------------------------------------------------------------------\n# Read the IP address of the 0-DB namespaces after they are deployed\n# we will need these IPs when creating the minio container\n# ----------------------------------------------------------------------------------\nnamespace_config = []\nfor result in results:\n data = result.data_json\n cfg = f\"{data['Namespace']}:{password}@[{data['IPs']}]:{data['Port']}\"\n namespace_config.append(cfg)\n \n# All IP's for the zdb's are now known and stored in the namespace_config structure.\nprint(namespace_config)",
"_____no_output_____"
]
],
[
[
"```\n['9012-4:supersecret@[2a04:7700:1003:1:54f0:edff:fe87:2c48]:9900', '9012-1:supersecret@[2a02:16a8:1000:0:5c2f:ddff:fe5a:1a70]:9900', '9012-2:supersecret@[2a02:16a8:1000:0:1083:59ff:fe38:ce71]:9900', '9012-7:supersecret@[2003:d6:2f32:8500:dc78:d6ff:fe04:7368]:9900', '9012-3:supersecret@[2a02:16a8:1000:0:fc7c:4aff:fec8:baf]:9900', '9012-5:supersecret@[2a04:7700:1003:1:acc0:2ff:fed3:1692]:9900', '9012-6:supersecret@[2a04:7700:1003:1:ac9d:f3ff:fe6a:47a9]:9900']\n```",
"_____no_output_____"
],
[
"Last step is to design the redundacy policy for the storage solution. We have 6 low level devices available (over 6 nodes, in 2 different data centers and cities). So we could build any of the following configurations:\n\n| Option | data storage devices | parity storage devices | total devices | overhead |\n|--------|---|---|---|---|\n| 1 | 3 | 3 | 6 | 50%% |\n| 2 | 4 | 2 | 6 | 33% |\n| 3 | 5 | 1 | 6 | 16% |\n\nNow in this example real efficiency of this solution is not achieved, in a real life deployment we would do something like this:\n\n| Option | data storage devices | parity storage devices | total devices | overhead |\n|--------|---|---|---|---|\n| 4 | 16 | 4 | 20 | 20% |\n\nIn that case it is highly unlikely that 4 distributed devices will fail at the same time, therefore this is a very robust storage solution\n\n\nHere we choose to deploy scenario 2 with 4 data disks and 2 parity disks.",
"_____no_output_____"
]
],
[
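[
"# ----------------------------------------------------------------------------------\n# Quick sanity check of the redundancy options above (a small sketch, not part of\n# the deployment itself): overhead is computed as parity devices / total devices;\n# the tables above round these figures.\n# ----------------------------------------------------------------------------------\nfor data, parity in [(3, 3), (4, 2), (5, 1), (16, 4)]:\n    total = data + parity\n    print(f\"data={data} parity={parity} total={total} overhead={parity/total:.1%}\")",
"_____no_output_____"
],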
[
"# ----------------------------------------------------------------------------------\n# With the low level disk managers done and the IP adresses discovered we could now build\n# the reservation for the min.io S3 interface.\n# ----------------------------------------------------------------------------------\nreservation_minio = zos.reservation_create()\n\n# Make sure to adjust the node_id and network name to the appropriate in copy / paste mode :-)\nminio_container=zos.container.create(reservation=reservation_minio,\n node_id=minio_node_id,\n network_name=u_networkname,\n ip_address=minio_node_ip,\n Flist='https://hub.grid.tf/azmy.3Bot/minio.Flist',\n interactive=False, \n entrypoint='/bin/entrypoint',\n cpu=2,\n memory=2048,\n env={\n \"SHARDS\":','.join(namespace_config),\n \"DATA\":\"4\",\n \"PARITY\":\"2\",\n \"ACCESS_KEY\":\"minio\",\n \"SECRET_KEY\":\"passwordpassword\",\n })",
"_____no_output_____"
]
],
[
[
"With the definition of the S3 container done we now need to attached persistent storage on a volume to store metadata.",
"_____no_output_____"
]
],
[
[
"# ----------------------------------------------------------------------------------\n# Attach persistant storage to container - for storing metadata\n# ---------------------------------------------------------------------------------- \nzos.volume.attach_existing(\n container=minio_container,\n volume_id=f'{volume_rid}-{volume.workload_id}',\n mount_point='/data')",
"_____no_output_____"
]
],
[
[
"Last but not least, execute the resevation for the storage manager.",
"_____no_output_____"
]
],
[
[
"# ----------------------------------------------------------------------------------\n# Write reservation for min.io container in BCDB - end user interface\n# ---------------------------------------------------------------------------------- \nexpiration = j.data.time.epoch + (nr_of_hours*hour)\n# register the reservation\nrid = zos.reservation_register(reservation_minio, expiration, identity=me)\ntime.sleep(5)\n\nresults = zos.reservation_result(rid)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
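"code",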
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d719259b7d0d4896b1ff39e96b7ca309d141a5 | 4,944 | ipynb | Jupyter Notebook | flask/torchTest.ipynb | JoshWorld/JoyurIZ-WEB | 83dcb5283f733501f6c30e8ccfec0a820ad580ae | [
"MIT"
] | 8 | 2020-04-28T13:55:11.000Z | 2021-09-30T12:58:09.000Z | flask/torchTest.ipynb | JoshWorld/JoyurIZ-WEB | 83dcb5283f733501f6c30e8ccfec0a820ad580ae | [
"MIT"
] | 1 | 2020-04-09T16:09:36.000Z | 2021-01-05T11:17:05.000Z | flask/torchTest.ipynb | JoshWorld/JoyurIZ-WEB | 83dcb5283f733501f6c30e8ccfec0a820ad580ae | [
"MIT"
] | 5 | 2020-10-11T08:23:52.000Z | 2022-03-20T09:53:04.000Z | 28.578035 | 94 | 0.483414 | [
[
[
"import torch\nfrom torchvision import transforms\nfrom PIL import Image",
"_____no_output_____"
],
[
"class CNN(torch.nn.Module):\n def __init__(self):\n super(CNN, self).__init__()\n self.keep_prob = 0.5\n self.layer1 = torch.nn.Sequential(\n torch.nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),\n torch.nn.ReLU(),\n torch.nn.MaxPool2d(kernel_size=2, stride=2))\n self.layer2 = torch.nn.Sequential(\n torch.nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),\n torch.nn.ReLU(),\n torch.nn.MaxPool2d(kernel_size=2, stride=2))\n self.layer3 = torch.nn.Sequential(\n torch.nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),\n torch.nn.ReLU(),\n torch.nn.MaxPool2d(kernel_size=2, stride=1, padding=1))\n\n self.fc1 = torch.nn.Linear(128 * 65 * 65, 64, bias=True)\n torch.nn.init.xavier_uniform_(self.fc1.weight)\n\n self.layer4 = torch.nn.Sequential(\n self.fc1,\n torch.nn.ReLU(),\n torch.nn.Dropout(p=1 - self.keep_prob))\n self.fc2 = torch.nn.Linear(64, 3, bias=True)\n torch.nn.init.xavier_uniform_(self.fc2.weight)\n\n def forward(self, x):\n out = self.layer1(x)\n out = self.layer2(out)\n out = self.layer3(out)\n out = out.view(out.size(0), -1) # Flatten them for FC\n #print(out.shape)\n out = self.layer4(out)\n out = self.fc2(out)\n return out",
"_____no_output_____"
],
[
"def transform_image(file):\n trans = transforms.Compose([\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\n ])\n image = Image.open(file)\n tensor_image = trans(image).unsqueeze_(0)\n return tensor_image",
"_____no_output_____"
],
[
"device = torch.device('cpu')\n\nmodel = torch.load('1.pt', map_location=device)\nprint(model.eval())\n\ntorch.save(model.state_dict(), \"2.pt\")",
"CNN(\n (layer1): Sequential(\n (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU()\n (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n )\n (layer2): Sequential(\n (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU()\n (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n )\n (layer3): Sequential(\n (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (1): ReLU()\n (2): MaxPool2d(kernel_size=2, stride=1, padding=1, dilation=1, ceil_mode=False)\n )\n (fc1): Linear(in_features=540800, out_features=64, bias=True)\n (layer4): Sequential(\n (0): Linear(in_features=540800, out_features=64, bias=True)\n (1): ReLU()\n (2): Dropout(p=0.5, inplace=False)\n )\n (fc2): Linear(in_features=64, out_features=3, bias=True)\n)\n"
]
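,
[
"# A minimal inference sketch (assumption: a 256x256 RGB image exists at the\n# hypothetical path 'sample.jpg'; with the pooling layers in CNN above, a 256x256\n# input yields the 128*65*65 features that fc1 expects).\nmodel2 = CNN()\nmodel2.load_state_dict(torch.load('2.pt', map_location=device))\nmodel2.eval()\n\nwith torch.no_grad():\n    x = transform_image('sample.jpg').to(device)  # shape (1, 3, 256, 256)\n    logits = model2(x)\n    print(logits, logits.argmax(dim=1))",
"_____no_output_____"
]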
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0d736f31b4717b66a8ea4825df20013428199c8 | 35,968 | ipynb | Jupyter Notebook | _notebooks/2021-05-09-Pedestrian_detector.ipynb | anders447/sample-ds-blog-anders | abd23826d3193e4edc3f6d874a44c76cb65abb6d | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-05-09-Pedestrian_detector.ipynb | anders447/sample-ds-blog-anders | abd23826d3193e4edc3f6d874a44c76cb65abb6d | [
"Apache-2.0"
] | null | null | null | _notebooks/2021-05-09-Pedestrian_detector.ipynb | anders447/sample-ds-blog-anders | abd23826d3193e4edc3f6d874a44c76cb65abb6d | [
"Apache-2.0"
] | null | null | null | 45.586819 | 1,430 | 0.546514 | [
[
[
"<a href=\"https://colab.research.google.com/github/anders447/sample-ds-blog-anders/blob/master/_notebooks/2021-05-09-Pedestrian_detector.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# pedestrian detector",
"_____no_output_____"
],
[
"https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html",
"_____no_output_____"
]
],
[
[
"pip install git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI",
"Collecting git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI\n Cloning https://github.com/gautamchitnis/cocoapi.git (to revision cocodataset-master) to /tmp/pip-req-build-0v7f304q\n Running command git clone -q https://github.com/gautamchitnis/cocoapi.git /tmp/pip-req-build-0v7f304q\n Running command git checkout -b cocodataset-master --track origin/cocodataset-master\n Switched to a new branch 'cocodataset-master'\n Branch 'cocodataset-master' set up to track remote branch 'cocodataset-master' from 'origin'.\nRequirement already satisfied (use --upgrade to upgrade): pycocotools==2.0 from git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI in /usr/local/lib/python3.7/dist-packages\nBuilding wheels for collected packages: pycocotools\n Building wheel for pycocotools (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pycocotools: filename=pycocotools-2.0-cp37-cp37m-linux_x86_64.whl size=264249 sha256=17c35a7c36ad6a8b630d42fbc47971ad4148e14463cd2af16c4acfa13a550e51\n Stored in directory: /tmp/pip-ephem-wheel-cache-vg75ykd3/wheels/49/ca/2b/1b99c52bb9e7a9804cd60d66243ec70c9f15977795793d2646\nSuccessfully built pycocotools\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"import zipfile\nwith zipfile.ZipFile(\"/content/drive/MyDrive/PennFudanPed.zip\", 'r') as zip_ref:\n zip_ref.extractall()",
"_____no_output_____"
],
[
"import sys\nsys.path.insert(1, '/content/drive/MyDrive/detection')\n\nimport utils\nimport transforms\nimport engine",
"_____no_output_____"
],
[
"import os\nimport numpy as np\nimport torch\nfrom PIL import Image\n\n\nclass PennFudanDataset(object):\n def __init__(self, root, transforms):\n self.root = root\n self.transforms = transforms\n # load all image files, sorting them to\n # ensure that they are aligned\n self.imgs = list(sorted(os.listdir(os.path.join(root, \"PNGImages\"))))\n self.masks = list(sorted(os.listdir(os.path.join(root, \"PedMasks\"))))\n\n def __getitem__(self, idx):\n # load images and masks\n img_path = os.path.join(self.root, \"PNGImages\", self.imgs[idx])\n mask_path = os.path.join(self.root, \"PedMasks\", self.masks[idx])\n img = Image.open(img_path).convert(\"RGB\")\n # note that we haven't converted the mask to RGB,\n # because each color corresponds to a different instance\n # with 0 being background\n mask = Image.open(mask_path)\n # convert the PIL Image into a numpy array\n mask = np.array(mask)\n # instances are encoded as different colors\n obj_ids = np.unique(mask)\n # first id is the background, so remove it\n obj_ids = obj_ids[1:]\n\n # split the color-encoded mask into a set\n # of binary masks\n masks = mask == obj_ids[:, None, None]\n\n # get bounding box coordinates for each mask\n num_objs = len(obj_ids)\n boxes = []\n for i in range(num_objs):\n pos = np.where(masks[i])\n xmin = np.min(pos[1])\n xmax = np.max(pos[1])\n ymin = np.min(pos[0])\n ymax = np.max(pos[0])\n boxes.append([xmin, ymin, xmax, ymax])\n\n # convert everything into a torch.Tensor\n boxes = torch.as_tensor(boxes, dtype=torch.float32)\n # there is only one class\n labels = torch.ones((num_objs,), dtype=torch.int64)\n masks = torch.as_tensor(masks, dtype=torch.uint8)\n\n image_id = torch.tensor([idx])\n area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])\n # suppose all instances are not crowd\n iscrowd = torch.zeros((num_objs,), dtype=torch.int64)\n\n target = {}\n target[\"boxes\"] = boxes\n target[\"labels\"] = labels\n target[\"masks\"] = masks\n target[\"image_id\"] = image_id\n target[\"area\"] = area\n target[\"iscrowd\"] = iscrowd\n\n if self.transforms is not None:\n img, target = self.transforms(img, target)\n\n return img, target\n\n def __len__(self):\n return len(self.imgs)",
"_____no_output_____"
],
[
"import torchvision\nfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictor\n\n# load a model pre-trained pre-trained on COCO\nmodel = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\n\n# replace the classifier with a new one, that has\n# num_classes which is user-defined\nnum_classes = 2 # 1 class (person) + background\n# get number of input features for the classifier\nin_features = model.roi_heads.box_predictor.cls_score.in_features\n# replace the pre-trained head with a new one\nmodel.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)",
"_____no_output_____"
],
[
"import torchvision\nfrom torchvision.models.detection import FasterRCNN\nfrom torchvision.models.detection.rpn import AnchorGenerator\n\n# load a pre-trained model for classification and return\n# only the features\nbackbone = torchvision.models.mobilenet_v2(pretrained=True).features\n# FasterRCNN needs to know the number of\n# output channels in a backbone. For mobilenet_v2, it's 1280\n# so we need to add it here\nbackbone.out_channels = 1280\n\n# let's make the RPN generate 5 x 3 anchors per spatial\n# location, with 5 different sizes and 3 different aspect\n# ratios. We have a Tuple[Tuple[int]] because each feature\n# map could potentially have different sizes and\n# aspect ratios\nanchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),\n aspect_ratios=((0.5, 1.0, 2.0),))\n\n# let's define what are the feature maps that we will\n# use to perform the region of interest cropping, as well as\n# the size of the crop after rescaling.\n# if your backbone returns a Tensor, featmap_names is expected to\n# be [0]. More generally, the backbone should return an\n# OrderedDict[Tensor], and in featmap_names you can choose which\n# feature maps to use.\nroi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],\n output_size=7,\n sampling_ratio=2)\n\n# put the pieces together inside a FasterRCNN model\nmodel = FasterRCNN(backbone,\n num_classes=2,\n rpn_anchor_generator=anchor_generator,\n box_roi_pool=roi_pooler)",
"_____no_output_____"
],
[
"import torchvision\nfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictor\nfrom torchvision.models.detection.mask_rcnn import MaskRCNNPredictor\n\n\ndef get_model_instance_segmentation(num_classes):\n # load an instance segmentation model pre-trained pre-trained on COCO\n model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)\n\n # get number of input features for the classifier\n in_features = model.roi_heads.box_predictor.cls_score.in_features\n # replace the pre-trained head with a new one\n model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)\n\n # now get the number of input features for the mask classifier\n in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels\n hidden_layer = 256\n # and replace the mask predictor with a new one\n model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,\n hidden_layer,\n num_classes)\n\n return model",
"_____no_output_____"
],
[
"import transforms as T\n\ndef get_transform(train):\n transforms = []\n transforms.append(T.ToTensor())\n if train:\n transforms.append(T.RandomHorizontalFlip(0.5))\n return T.Compose(transforms)",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)\ndataset = PennFudanDataset('PennFudanPed', get_transform(train=True))\ndata_loader = torch.utils.data.DataLoader(\n dataset, batch_size=2, shuffle=True, num_workers=4,\n collate_fn=utils.collate_fn)\n# For Training\nimages,targets = next(iter(data_loader))\nimages = list(image for image in images)\ntargets = [{k: v for k, v in t.items()} for t in targets]\noutput = model(images,targets) # Returns losses and detections\n# For inference\nmodel.eval()\nx = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]\npredictions = model(x) # Returns predictions",
"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:477: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n cpuset_checked))\n"
],
[
"from engine import train_one_epoch, evaluate\nimport utils\n\n\ndef main():\n # train on the GPU or on the CPU, if a GPU is not available\n device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\n\n # our dataset has two classes only - background and person\n num_classes = 2\n # use our dataset and defined transformations\n dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))\n dataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))\n\n # split the dataset in train and test set\n indices = torch.randperm(len(dataset)).tolist()\n dataset = torch.utils.data.Subset(dataset, indices[:-50])\n dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])\n\n # define training and validation data loaders\n data_loader = torch.utils.data.DataLoader(\n dataset, batch_size=2, shuffle=True, num_workers=4,\n collate_fn=utils.collate_fn)\n\n data_loader_test = torch.utils.data.DataLoader(\n dataset_test, batch_size=1, shuffle=False, num_workers=4,\n collate_fn=utils.collate_fn)\n\n # get the model using our helper function\n model = get_model_instance_segmentation(num_classes)\n\n # move model to the right device\n model.to(device)\n\n # construct an optimizer\n params = [p for p in model.parameters() if p.requires_grad]\n optimizer = torch.optim.SGD(params, lr=0.005,\n momentum=0.9, weight_decay=0.0005)\n # and a learning rate scheduler\n lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,\n step_size=3,\n gamma=0.1)\n\n # let's train it for 10 epochs\n num_epochs = 10\n\n for epoch in range(num_epochs):\n # train for one epoch, printing every 10 iterations\n train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)\n # update the learning rate\n lr_scheduler.step()\n # evaluate on the test dataset\n evaluate(model, data_loader_test, device=device)\n\n print(\"That's it!\")",
"_____no_output_____"
],
[
"main()",
"/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:477: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.\n cpuset_checked))\nDownloading: \"https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth\" to /root/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth\n"
],
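[
"# A hedged inference sketch: main() above does not return the trained model, so to\n# run this you would first modify main() to end with `return model` (a hypothetical\n# change). Given such a model, this shows a single-image prediction.\ntrained_model = main()  # assumes main() was edited to return the trained model\ntrained_model.eval()\n\ndevice = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')\ndataset_test = PennFudanDataset('PennFudanPed', get_transform(train=False))\nimg, _ = dataset_test[0]\n\nwith torch.no_grad():\n    prediction = trained_model([img.to(device)])\nprint(prediction[0]['boxes'])\nprint(prediction[0]['scores'])",
"_____no_output_____"
],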
[
"",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d747cd243ded654a40fbadfe83a9121971eb81 | 8,506 | ipynb | Jupyter Notebook | notebooks/example.ipynb | faab5/errortools | b3d8972d884207aa8ab89dac98c23fd5f46a0c65 | [
"MIT"
] | 17 | 2019-04-20T18:00:09.000Z | 2021-03-07T15:37:08.000Z | notebooks/example.ipynb | faab5/errortools | b3d8972d884207aa8ab89dac98c23fd5f46a0c65 | [
"MIT"
] | 1 | 2019-04-17T15:00:15.000Z | 2019-04-17T15:00:15.000Z | notebooks/example.ipynb | faab5/errortools | b3d8972d884207aa8ab89dac98c23fd5f46a0c65 | [
"MIT"
] | 3 | 2019-05-19T13:43:41.000Z | 2019-11-17T16:29:52.000Z | 33.888446 | 126 | 0.567599 | [
[
[
"%pylab inline\nimport numpy as np\nimport pandas as pd\nimport scipy.stats\nfrom matplotlib.backends.backend_pdf import PdfPages",
"_____no_output_____"
],
[
"import sys\nsys.path.append(\"../errortools/\")\nimport errortools",
"_____no_output_____"
]
],
[
[
"# Fitting and predicting",
"_____no_output_____"
]
],
[
[
"ndim = 3\nfit_intercept = True\nndata = 100\n\np_true = [2, 0, -2, 0]",
"_____no_output_____"
],
[
"np.random.seed(42)\nX = np.random.uniform(low=-1, high=1, size=ndim*ndata).reshape(ndata, ndim)\np = scipy.stats.logistic.cdf(np.dot(np.concatenate((X, np.ones((X.shape[0],1), dtype=float)), axis=1), p_true))\ny = (p > np.random.uniform(size=ndata)).astype(int)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 3, figsize=(15,4))\n\nax[0].plot(X[y==0,0], X[y==0,1], 'o', color='orange', alpha=0.2, markersize=5)\nax[0].plot(X[y==1,0], X[y==1,1], 'o', color='green', alpha=0.2, markersize=5)\nax[0].set_xlabel(\"x0\")\nax[0].set_ylabel(\"x1\")\n\nax[1].plot(X[y==0,0], X[y==0,2], 'o', color='orange', alpha=0.2, markersize=5)\nax[1].plot(X[y==1,0], X[y==1,2], 'o', color='green', alpha=0.2, markersize=5)\nax[1].set_xlabel(\"x0\")\nax[1].set_ylabel(\"x2\")\n\nax[2].plot(X[y==0,1], X[y==0,2], 'o', color='orange', alpha=0.2, markersize=5)\nax[2].plot(X[y==1,1], X[y==1,2], 'o', color='green', alpha=0.2, markersize=5)\nax[2].set_xlabel(\"x1\")\nax[2].set_ylabel(\"x2\");",
"_____no_output_____"
],
[
"model = errortools.LogisticRegression(fit_intercept=True)\nmodel.fit(X,y)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(1, 3, figsize=(20,5))\n\nnstddvs = 1\n\np = model.parameters\ncvr_mtx = model.cvr_mtx\nprc_mtx = np.linalg.inv(cvr_mtx)\n\nu = np.linspace(-2, 2, 100).reshape(-1,1)\na = np.zeros((100,1), dtype=float)\n\nx = np.concatenate((u, a, a), axis=1)\nf = model.predict(x)\nel1, eu1 = model.estimate_errors(x, nstddvs)\nes = model.estimate_errors_sampling(x, 100)\nel = model.estimate_errors_linear(x, 1)\ng = scipy.stats.logistic.cdf(np.dot(np.concatenate((x,np.ones((x.shape[0],1))),axis=1), p_true))\nax[0].plot(u, g, '-', color='black', alpha=1, label=\"true curve\")\nax[0].plot(u, f, '-', color='red', label=\"fitted curve\")\nax[0].fill_between(x=u.ravel(), y1=f-el1, y2=f+eu1, alpha=0.3, color='green', label=\"error\")\nax[0].fill_between(x=u.ravel(), y1=f-nstddvs*es, y2=f+nstddvs*es, alpha=0.3, color='orange', label=\"sampled error\")\nax[0].fill_between(x=u.ravel(), y1=f-nstddvs*el, y2=f+nstddvs*el, alpha=0.3, color='blue', label=\"linear error\")\nax[0].set_xlabel(\"x0\")\nax[0].set_ylabel(\"logistic prob\")\nax[0].legend()\n\nx = np.concatenate((a, u, a), axis=1)\nf = model.predict(x)\nel1, eu1 = model.estimate_errors(x, nstddvs)\nes = model.estimate_errors_sampling(x, 100)\nel = model.estimate_errors_linear(x, 1)\ng = scipy.stats.logistic.cdf(np.dot(np.concatenate((x,np.ones((x.shape[0],1))),axis=1), p_true))\nax[1].plot(u, g, '-', color='black', alpha=1, label=\"true curve\")\nax[1].plot(u, f, '-', color='red', label=\"fitted curve\")\nax[1].fill_between(x=u.ravel(), y1=f-el1, y2=f+eu1, alpha=0.3, color='green', label=\"error\")\nax[1].fill_between(x=u.ravel(), y1=f-nstddvs*es, y2=f+nstddvs*es, alpha=0.3, color='orange', label=\"sampled error\")\nax[1].fill_between(x=u.ravel(), y1=f-nstddvs*el, y2=f+nstddvs*el, alpha=0.3, color='blue', label=\"linear error\")\nax[1].set_xlabel(\"x1\")\nax[1].set_ylabel(\"logistic prob\")\nax[1].legend()\n\nx = np.concatenate((a, a, u), axis=1)\nf = model.predict(x)\nel1, eu1 = model.estimate_errors(x, nstddvs)\nes = model.estimate_errors_sampling(x, 100)\nel = model.estimate_errors_linear(x, 1)\ng = scipy.stats.logistic.cdf(np.dot(np.concatenate((x,np.ones((x.shape[0],1))),axis=1), p_true))\nax[2].plot(u, g, '-', color='black', alpha=1, label=\"true curve\")\nax[2].plot(u, f, '-', color='red', label=\"fitted curve\")\nax[2].fill_between(x=u.ravel(), y1=f-el1, y2=f+eu1, alpha=0.3, color='green', label=\"error\")\nax[2].fill_between(x=u.ravel(), y1=f-nstddvs*es, y2=f+nstddvs*es, alpha=0.3, color='orange', label=\"sampled error\")\nax[2].fill_between(x=u.ravel(), y1=f-nstddvs*el, y2=f+nstddvs*el, alpha=0.3, color='blue', label=\"linear error\")\nax[2].set_xlabel(\"x2\")\nax[2].set_ylabel(\"logistic prob\")\nax[2].legend();",
"_____no_output_____"
]
],
[
[
"\n# Create report (2 ways)",
"_____no_output_____"
]
],
[
[
"features = ['x1', 'x2', 'x3', 'bias']\nwith PdfPages('Report.pdf') as pdf:\n errortools.errortools.report_correlation_matrix(model, features, pdf)\n errortools.errortools.report_parameter_error(model, features, pdf)\n errortools.errortools.report_loss_versus_approximation(model, X, y, 0, 0, features, pdf)\n errortools.report_error_indivial_pred(model, X[0], 'x1', features, 0, 20, 100, pdf)\n errortools.report_error_indivial_pred(model, X[0], 'x2', features, 0, 20, 100, pdf)\n errortools.report_model_positive_ratio(model, X, y, 1000, 10, pdf)\n errortools.report_error_test_samples(model, X, pdf)",
"_____no_output_____"
],
[
"pdf = errortools.errortools.report_correlation_matrix(model, features=features)\npdf = errortools.errortools.report_parameter_error(model, features, pdf)\npdf = errortools.errortools.report_loss_versus_approximation(model, X, y, 0, 0, features, pdf)\npdf = errortools.report_error_indivial_pred(model, X[0], 'x1', features, 0, 20, 100, pdf)\npdf = errortools.report_error_indivial_pred(model, X[0], 'x2', features, 0, 20, 100, pdf)\npdf = errortools.report_model_positive_ratio(model, X, y, 1000, 10, pdf)\npdf = errortools.report_error_test_samples(model, X, pdf)\npdf.close()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0d748fac4fcdc9388ba06d4d907c0fbc3c353f7 | 103,996 | ipynb | Jupyter Notebook | Python Basic.ipynb | fazlullahb0/python | e4819244972a8cb1ee4ec9a50420a18087d6e980 | [
"MIT"
] | 1 | 2022-02-19T12:03:09.000Z | 2022-02-19T12:03:09.000Z | Python Basic.ipynb | fazlullahb0/python | e4819244972a8cb1ee4ec9a50420a18087d6e980 | [
"MIT"
] | 1 | 2022-03-16T06:29:34.000Z | 2022-03-16T06:29:34.000Z | Python Basic.ipynb | fazlullahb0/python | e4819244972a8cb1ee4ec9a50420a18087d6e980 | [
"MIT"
] | null | null | null | 17.859523 | 630 | 0.438536 | [
[
[
"import this",
"The Zen of Python, by Tim Peters\n\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!\n"
],
[
"print(\"this is my first program. \")",
"this is my first program. \n"
],
[
"len(\"fazlullah\")",
"_____no_output_____"
],
[
"a = 10",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"type(a)",
"_____no_output_____"
],
[
"b = 45.5",
"_____no_output_____"
],
[
"type(b)",
"_____no_output_____"
],
[
"c = \"fazlullah\"",
"_____no_output_____"
],
[
"type(c)",
"_____no_output_____"
],
[
"d = 5+6j",
"_____no_output_____"
],
[
"type(d)",
"_____no_output_____"
],
[
"g = True",
"_____no_output_____"
],
[
"type(g)",
"_____no_output_____"
],
[
"*a = 67",
"_____no_output_____"
],
[
"_a = 88",
"_____no_output_____"
],
[
"type(a)",
"_____no_output_____"
],
[
"a = 34",
"_____no_output_____"
],
[
"type(_a)",
"_____no_output_____"
],
[
"a, b, c, d, e = 124,\"fazlullah\",6+8j,False,88.2",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
],
[
"c",
"_____no_output_____"
],
[
"a = \"sudh\"",
"_____no_output_____"
],
[
"a+str(4)",
"_____no_output_____"
],
[
"True + True",
"_____no_output_____"
],
[
"True - False",
"_____no_output_____"
],
[
"1 + True",
"_____no_output_____"
],
[
"a = input()",
"faiz\n"
],
[
"a",
"_____no_output_____"
],
[
"a = input()",
"12\n"
],
[
"int(a)+8",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"pwd",
"_____no_output_____"
],
[
"c",
"_____no_output_____"
],
[
"c.conjugate()",
"_____no_output_____"
],
[
"c.imag",
"_____no_output_____"
],
[
"c.real",
"_____no_output_____"
],
[
"s = \"sudh\"",
"_____no_output_____"
],
[
"s[1]",
"_____no_output_____"
],
[
"s[2]",
"_____no_output_____"
],
[
"s[3]",
"_____no_output_____"
],
[
"s[4]",
"_____no_output_____"
],
[
"s[100]",
"_____no_output_____"
],
[
"s[-1]",
"_____no_output_____"
],
[
"a = \"my name is sudh\"",
"_____no_output_____"
],
[
"a[:10]",
"_____no_output_____"
],
[
"b = \"ineuron\"",
"_____no_output_____"
],
[
"b[:3]",
"_____no_output_____"
],
[
"b[:300]",
"_____no_output_____"
],
[
"b[300]",
"_____no_output_____"
],
[
"b[-1]",
"_____no_output_____"
],
[
"b[-100]",
"_____no_output_____"
],
[
"b[-1:-4]",
"_____no_output_____"
],
[
"b[0:4]",
"_____no_output_____"
],
[
"a = 'kumar'",
"_____no_output_____"
],
[
"a[0:300]",
"_____no_output_____"
],
[
"a[0:300:1]",
"_____no_output_____"
],
[
"a[0:300:2]",
"_____no_output_____"
],
[
"a[0:300:3]",
"_____no_output_____"
],
[
"a[0:100:-1]",
"_____no_output_____"
],
[
"a[-1:-4]",
"_____no_output_____"
],
[
"a[-1:-4:-1]",
"_____no_output_____"
],
[
"a[-1:-10:-1]",
"_____no_output_____"
],
[
"a[0:-10:-1]",
"_____no_output_____"
],
[
"a[::]",
"_____no_output_____"
],
[
"a[-2:]",
"_____no_output_____"
],
[
"a[-2:-1]",
"_____no_output_____"
],
[
"a[::-1]",
"_____no_output_____"
],
[
"a[-1::-1]",
"_____no_output_____"
],
[
"a = \"I am working with ineuron\"",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"a[::-1]",
"_____no_output_____"
],
[
"a[-5:5]",
"_____no_output_____"
],
[
"a[-5:5:-1]",
"_____no_output_____"
],
[
"a[-2:-10:-1]",
"_____no_output_____"
],
[
"\"sudh\"*3",
"_____no_output_____"
],
[
"\"sudh\" + \" kumar\"",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"len(a)",
"_____no_output_____"
],
[
"a.find('a')",
"_____no_output_____"
],
[
"a.find('i')",
"_____no_output_____"
],
[
"a.find('ia')",
"_____no_output_____"
],
[
"a.find('in')",
"_____no_output_____"
],
[
"a.count('i')",
"_____no_output_____"
],
[
"a",
"_____no_output_____"
],
[
"a.count('x')",
"_____no_output_____"
],
[
"l = a.split()",
"_____no_output_____"
],
[
"l[0]",
"_____no_output_____"
],
[
"l[1]",
"_____no_output_____"
],
[
"l[2]",
"_____no_output_____"
],
[
"l[0:3]",
"_____no_output_____"
],
[
"a.split('w')",
"_____no_output_____"
],
[
"a.split('wo')",
"_____no_output_____"
],
[
"a.upper()",
"_____no_output_____"
],
[
"s = \"sUdh\"",
"_____no_output_____"
],
[
"s.swapcase()",
"_____no_output_____"
],
[
"s.title()",
"_____no_output_____"
],
[
"s.capitalize()",
"_____no_output_____"
],
[
"b = \"sudh\"\nc = \"ineuron\"",
"_____no_output_____"
],
[
"b.join(c)",
"_____no_output_____"
],
[
"\" \".join(\"sudh\")",
"_____no_output_____"
],
[
"for i in reversed(\"sudh\"):\n print(i)",
"h\nd\nu\ns\n"
],
[
"s = \" sudh \"\ns[::-1]",
"_____no_output_____"
],
[
"s.rstrip()",
"_____no_output_____"
],
[
"s.lstrip()",
"_____no_output_____"
],
[
"s.strip()",
"_____no_output_____"
],
[
"s = \"sudh\"",
"_____no_output_____"
],
[
"s.replace(\"u\", \"xyz\")",
"_____no_output_____"
],
[
"s.replace(\"t\",\"xyz\")",
"_____no_output_____"
],
[
"\"sudh\\tkumar\".expandtabs()",
"_____no_output_____"
],
[
"s.center(40,'t')",
"_____no_output_____"
],
[
"s.isupper()",
"_____no_output_____"
],
[
"s = \"Sudh\"\ns.isupper()",
"_____no_output_____"
],
[
"s = \"SUDH\"\ns.isupper()",
"_____no_output_____"
],
[
"s.islower()",
"_____no_output_____"
],
[
"s.isspace()",
"_____no_output_____"
],
[
"s = \" sudh\"\ns.isspace()",
"_____no_output_____"
],
[
"s = \" \"\ns.isspace()",
"_____no_output_____"
],
[
"s = \"sudh\"\ns.isdigit()",
"_____no_output_____"
],
[
"s = \"456321\"\ns.isdigit()",
"_____no_output_____"
],
[
"s = \"sudh\"\ns.endswith('h')",
"_____no_output_____"
],
[
"s.endswith('x')",
"_____no_output_____"
],
[
"s.startswith('s')",
"_____no_output_____"
],
[
"s.istitle()",
"_____no_output_____"
],
[
"s.encode()",
"_____no_output_____"
],
[
"l = [\"sudh\", \"kumar\",3456,4+9j, True, 354.25]",
"_____no_output_____"
],
[
"type(l)",
"_____no_output_____"
],
[
"l[0]",
"_____no_output_____"
],
[
"l[-1]",
"_____no_output_____"
],
[
"l[-5]",
"_____no_output_____"
],
[
"l[0:4]",
"_____no_output_____"
],
[
"l[::-1]",
"_____no_output_____"
],
[
"l[-1:6]",
"_____no_output_____"
],
[
"l[0]",
"_____no_output_____"
],
[
"l[0][1]",
"_____no_output_____"
],
[
"l[3].real",
"_____no_output_____"
],
[
"l1 = [\"sudh\", \"kumar\",4587]\nl2 = [\"xyz\",\"pqr\",456.25]",
"_____no_output_____"
],
[
"l1+l2",
"_____no_output_____"
],
[
"l1 + [\"sudh\"]",
"_____no_output_____"
],
[
"l1*4",
"_____no_output_____"
],
[
"l1*2",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"l3 = l1[0].replace(\"sudh\",\"Faiz\")\nl3",
"_____no_output_____"
],
[
"l3",
"_____no_output_____"
],
[
"l1[0] = \"Faiz\"",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"l4 = l[1].replace('k','s')",
"_____no_output_____"
],
[
"l4",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"len(l1)",
"_____no_output_____"
],
[
"32547 in l1",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2.append(\"sudh\")",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2.pop()",
"_____no_output_____"
],
[
"l2.pop(2)",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2.append(345687)",
"_____no_output_____"
],
[
"l2.insert(1,\"faiz\")",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2.insert(3,[325,'bukhari',\"kumar\"])",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2[::-1]",
"_____no_output_____"
],
[
"l2.reverse()",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2[1][2]",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l2.count('xyz')",
"_____no_output_____"
],
[
"l2.append(\"munger\")",
"_____no_output_____"
],
[
"l2.append([3,4,54,6])",
"_____no_output_____"
],
[
"l2",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"l1.extend(['faiz',3548,2.25,True])",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"#3:03/4",
"_____no_output_____"
],
[
"#Tuples",
"_____no_output_____"
],
[
"t = (1,2,3,4,5)",
"_____no_output_____"
],
[
"type(t)",
"_____no_output_____"
],
[
"t1 = (\"sudh\",345,45+6j, 45.50, True)",
"_____no_output_____"
],
[
"l = [\"sudh\",345,45+6j, 45.50, True]",
"_____no_output_____"
],
[
"type(t1)",
"_____no_output_____"
],
[
"type(l)",
"_____no_output_____"
],
[
"t2 = ()",
"_____no_output_____"
],
[
"type(t2)",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"l",
"_____no_output_____"
],
[
"l[0:2]",
"_____no_output_____"
],
[
"t1[0:2]",
"_____no_output_____"
],
[
"t1[::-1]",
"_____no_output_____"
],
[
"t1[-1]",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t1[0::2]",
"_____no_output_____"
],
[
"l",
"_____no_output_____"
],
[
"l1 = [4,5,6,7]",
"_____no_output_____"
],
[
"l1",
"_____no_output_____"
],
[
"l[0] = \"kumar\"",
"_____no_output_____"
],
[
"l",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t1[0] = \"xyz\"",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t2 = (34,56,56.5,654)",
"_____no_output_____"
],
[
"t1+t2",
"_____no_output_____"
],
[
"l+l1",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t1*2",
"_____no_output_____"
],
[
"t1.count(\"sudh\")",
"_____no_output_____"
],
[
"t1.index(\"sudh\")",
"_____no_output_____"
],
[
"t = (45,456,23.5,(\"sudh\",4,5,6),(\"sudh\"))",
"_____no_output_____"
],
[
"t",
"_____no_output_____"
],
[
"t1 = ([1,2,30],(\"sudh\",456,23.5,45),\"sudh\")",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t1[0][1] = \"faiz\"",
"_____no_output_____"
],
[
"t1[0][1]",
"_____no_output_____"
],
[
"t1",
"_____no_output_____"
],
[
"t1[0] = \"faiz\"",
"_____no_output_____"
],
[
"list(t1)",
"_____no_output_____"
],
[
"l",
"_____no_output_____"
],
[
"tuple(l)",
"_____no_output_____"
],
[
"# set",
"_____no_output_____"
],
[
"l = [1,2,3,4,5,6,5,5,55,6,4,7,8,6,4,5,55,5,5,5]",
"_____no_output_____"
],
[
"set(l)",
"_____no_output_____"
],
[
"s = {}",
"_____no_output_____"
],
[
"type(s)",
"_____no_output_____"
],
[
"s1 = {1,2,3,4}",
"_____no_output_____"
],
[
"type(s1)",
"_____no_output_____"
],
[
"s2 = {1,11,2,3,1,1,2,3,4,4,4,5,6,6,6,2,0,1,1,4,2,2,1,1}",
"_____no_output_____"
],
[
"s2",
"_____no_output_____"
],
[
"s2[0]",
"_____no_output_____"
],
[
"list(s2)",
"_____no_output_____"
],
[
"s2",
"_____no_output_____"
],
[
"s2.add(1234)",
"_____no_output_____"
],
[
"s2",
"_____no_output_____"
],
[
"s2.add(\"faiz\")",
"_____no_output_____"
],
[
"s2",
"_____no_output_____"
],
[
"s2.add([1,2,3,4])",
"_____no_output_____"
],
[
"{[3,4,5],3,45,56,4}",
"_____no_output_____"
],
[
"{(3,4,5),3,45,56,4}",
"_____no_output_____"
],
[
"s = {(3, 4, 5),(3, 4, 5), 3, 4, 45, 56, 3, 4, 45, 56}",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
],
[
"s.remove(4)",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
],
[
"s.discard(45)",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
],
[
"s.discard(45)",
"_____no_output_____"
],
[
"s",
"_____no_output_____"
],
[
"s.remove(4)",
"_____no_output_____"
],
[
"# set is neithe a mutable nor imutable ",
"_____no_output_____"
],
[
"{\"Faiz\",\"faiz\"}",
"_____no_output_____"
],
[
"{\"sudh\",\"Sudh\"}",
"_____no_output_____"
],
[
"s = {1,2,3,4,4,5,1,2,1,2,3,2,4,\"faiz\",\"faiz\"}\ns",
"_____no_output_____"
],
[
"#dictionary",
"_____no_output_____"
],
[
"d = {}",
"_____no_output_____"
],
[
"type(d)",
"_____no_output_____"
],
[
"d = {1,5}",
"_____no_output_____"
],
[
"type(d)",
"_____no_output_____"
],
[
"d = {4:\"sudh\"}",
"_____no_output_____"
],
[
"d1 = {\"key1\":4554,\"key2\":\"sudh\",45:[3,4,5,6,8]}",
"_____no_output_____"
],
[
"d1",
"_____no_output_____"
],
[
"d1[\"key1\"]",
"_____no_output_____"
],
[
"d1[45]",
"_____no_output_____"
],
[
"d = {3:[\"sudh\",'faiz',4,5,6,4]}",
"_____no_output_____"
],
[
"d[3]",
"_____no_output_____"
],
[
"d = {_4:[\"sudh\",'faiz',4,5,6,4]}",
"_____no_output_____"
],
[
"d = {.4:[\"sudh\",'faiz',4,5,6,4]}",
"_____no_output_____"
],
[
"d = {\"key\":(\"sudh\",'faiz',4,5,6,4)}",
"_____no_output_____"
],
[
"d = {\"key\":{\"sudh\",'faiz',4,5,6,4}}",
"_____no_output_____"
],
[
"d1 = {\"key1\":[2,3,4,5],\"key2\":\"sudh\",\"key1\":45}",
"_____no_output_____"
],
[
"d1[\"key1\"]",
"_____no_output_____"
],
[
"d1",
"_____no_output_____"
],
[
"d1 = {\"key1\":[2,3,4,5],\"key2\":\"sudh\",\"kumar\":45}",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d = {\"name\":\"sudhanshu\",\"mo_no\":9873300865,\"mail_id\":\"[email protected]\",\"key1\":[4,5,6,7],\"key2\":(3,4,5,6),\n \"key3\":{4,7,8,5,78,5,4,5},\"key4\":{1:5,5:6}}",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d[\"key3\"]",
"_____no_output_____"
],
[
"type(d[\"key3\"])",
"_____no_output_____"
],
[
"d[\"key4\"]",
"_____no_output_____"
],
[
"d[\"key4\"][5]",
"_____no_output_____"
],
[
"d.keys()",
"_____no_output_____"
],
[
"d.values()",
"_____no_output_____"
],
[
"d.items()",
"_____no_output_____"
],
[
"type(d.items())",
"_____no_output_____"
],
[
"d = {\"key1\":\"sudh\",\"key2\":[1,2,3,4]}",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d[\"key3\"] = \"kumar\"",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d[4] = [2,5,4,8,6]",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d[\"key1\"] = \"Fazlullah\"",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"del d[\"key1\"]",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"del d",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"d1 = {\"key1\":\"sudh\",\"key2\":[4,5,67,8]}",
"_____no_output_____"
],
[
"d1[[1,2,3]] = \"ineuron\"",
"_____no_output_____"
],
[
"d1",
"_____no_output_____"
],
[
"d1[(1,2,3)] = \"ineuron\"",
"_____no_output_____"
],
[
"d1",
"_____no_output_____"
],
[
"d1.get(\"key1\")",
"_____no_output_____"
],
[
"d1 = {\"key1\":\"ineuron\",\"key\":\"FSDS\"}",
"_____no_output_____"
],
[
"d2 = {\"key2\":456,\"key3\":[1,2,3,4,5]}",
"_____no_output_____"
],
[
"d1.update(d2)",
"_____no_output_____"
],
[
"d1",
"_____no_output_____"
],
[
"d2",
"_____no_output_____"
],
[
"d1+d2",
"_____no_output_____"
],
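[
"# dicts do not support '+'; one way to merge two dicts is dict unpacking\nd3 = {**d1, **d2}\nd3",
"_____no_output_____"
],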
[
"t1 = (\"faiz\",1,1+5j,True)\nt1.index(True)",
"_____no_output_____"
],
[
"set(t1)",
"_____no_output_____"
],
[
"d1",
"_____no_output_____"
],
[
"key = (\"name\",\"mobile_no\",\"email_id\")\nvalue = \"sudh\"",
"_____no_output_____"
],
[
"d = d1.fromkeys(key,value)",
"_____no_output_____"
],
[
"d",
"_____no_output_____"
],
[
"#3:20/5",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d7496ff23201b35feda74c70937f1872fad5e3 | 100,104 | ipynb | Jupyter Notebook | docs/examples/2-Montecarlo/1-ProbVar.ipynb | scuervo91/volumetricspy | f2ed85628400f22e0dc5ee3871c06851f7bb0264 | [
"MIT"
] | null | null | null | docs/examples/2-Montecarlo/1-ProbVar.ipynb | scuervo91/volumetricspy | f2ed85628400f22e0dc5ee3871c06851f7bb0264 | [
"MIT"
] | null | null | null | docs/examples/2-Montecarlo/1-ProbVar.ipynb | scuervo91/volumetricspy | f2ed85628400f22e0dc5ee3871c06851f7bb0264 | [
"MIT"
] | null | null | null | 186.413408 | 20,515 | 0.58647 | [
[
[
"# Probabilistic Variables",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom volumetricspy import ProbVar, MonteCarlo",
"_____no_output_____"
]
],
[
[
"## Constant Values",
"_____no_output_____"
]
],
[
[
"x = ProbVar(name='test1',constant=5.)\nx.get_sample()",
"_____no_output_____"
],
[
"y = ProbVar(\n name = 'test',\n dist = 'lognorm', \n kw = {'s':0.42, 'scale':500})\n\nsns.histplot(y.get_sample(size=100))",
"_____no_output_____"
],
[
"def ooip(a,h,bo,phi,sw):\n return 7758*a*h*phi*(1-sw)*(1/bo)*(1/1000000)",
"_____no_output_____"
],
[
"a = ProbVar(name = 'area',dist='norm',kw={'loc':1000,'scale':200})\nh = ProbVar(name = 'height',dist='uniform',kw={'loc':180,'scale':70})\nbo = ProbVar(name = 'bo',constant=1.12)\nphi = ProbVar(name = 'phi',dist='norm',kw={'loc':0.18,'scale':0.04})\nsw = ProbVar(name = 'sw',dist='norm',kw={'loc':0.4,'scale':0.1})",
"_____no_output_____"
],
[
"OOIP = MonteCarlo(\n name = 'OOIP',\n func = ooip,\n args = [a,h,bo,phi,sw],\n)",
"_____no_output_____"
],
[
"ss = OOIP.get_sample(size=250)\n\nsns.histplot(ss)",
"_____no_output_____"
],
[
"ooip_df = OOIP.get_sample_df(size=10)\nooip_df",
"_____no_output_____"
],
[
"sns.histplot(ooip_df['area'])",
"_____no_output_____"
],
[
"OOIP.get_sample_df(ppf=np.linspace(0.05,0.95,10))",
"_____no_output_____"
]
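,
[
"# Summary sketch: P10/P50/P90 of a fresh OOIP Monte Carlo sample\n# (np.percentile works on the array/Series returned by get_sample)\nsample = OOIP.get_sample(size=1000)\nnp.percentile(sample, [10, 50, 90])",
"_____no_output_____"
]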
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d749dac8830a4d97c94d9b0fcb2e353102b411 | 1,269 | ipynb | Jupyter Notebook | 00-index.ipynb | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | 2 | 2018-04-20T13:54:38.000Z | 2018-06-09T01:43:45.000Z | 00-index.ipynb | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | 5 | 2018-03-02T13:08:32.000Z | 2018-03-05T16:02:19.000Z | 00-index.ipynb | jmencisom/nb-mn | 799550d5b6524c36ff339067e233c4508b964866 | [
"MIT"
] | null | null | null | 20.803279 | 59 | 0.537431 | [
[
[
"# Métodos Numéricos",
"_____no_output_____"
],
[
"## Contenido",
"_____no_output_____"
],
[
"* [01 - Representación](01-representacion.ipynb)\n* [02 - Series de Taylor](02-taylor.ipynb)\n* [03 - Método de bisección](03-biseccion.ipynb)\n* [04 - Mínimos cuadrados](04-min-cuadrados.ipynb)\n* [05 - Splines](05-splines.ipynb)\n* [06 - Método del trapecio](06-trapecio.ipynb)\n* [07 - Diferenciación](07-diferenciacion.ipynb)\n* [08 - Jacobi-Gauss](08-jacobi-gauss.ipynb)\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
]
] |
d0d787453cd7cd9f86d4c22596a9560194d74e99 | 1,263 | ipynb | Jupyter Notebook | Week 1/HW1_1.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | Week 1/HW1_1.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | Week 1/HW1_1.ipynb | drkndl/PH354-IISc | e1b40a1ed11fb1967cfb5204d81ee237df453d39 | [
"MIT"
] | null | null | null | 16.84 | 43 | 0.479018 | [
[
[
"\"\"\"\nCreated on Wed 24th Feb 2021 16:26\n@author: Drishika Nadella\n\"\"\"",
"_____no_output_____"
],
[
"import numpy as np",
"_____no_output_____"
],
[
"def FallTime(h):\n g = 9.81\n return np.sqrt(2*h/g)",
"_____no_output_____"
],
[
"print(FallTime(100))",
"4.515236409857309\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0d79af850c10fb43f6c670ea8fa98678b71d934 | 494,073 | ipynb | Jupyter Notebook | sagemaker-deployment/Tutorials/Boston Housing - Updating an Endpoint.ipynb | dilayercelik/MLND-learning | ba3d22bf84baa2e4f358462c8ef8f89de04d0f11 | [
"MIT"
] | null | null | null | sagemaker-deployment/Tutorials/Boston Housing - Updating an Endpoint.ipynb | dilayercelik/MLND-learning | ba3d22bf84baa2e4f358462c8ef8f89de04d0f11 | [
"MIT"
] | null | null | null | sagemaker-deployment/Tutorials/Boston Housing - Updating an Endpoint.ipynb | dilayercelik/MLND-learning | ba3d22bf84baa2e4f358462c8ef8f89de04d0f11 | [
"MIT"
] | null | null | null | 134.258967 | 1,528 | 0.597873 | [
[
[
"# Predicting Boston Housing Prices\n\n## Updating a model using SageMaker\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nIn this notebook, we will continue working with the [Boston Housing Dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html). Our goal in this notebook will be to train two different models and to use SageMaker to switch a deployed endpoint from using one model to the other. One of the benefits of using SageMaker to do this is that we can make the change without interrupting service. What this means is that we can continue sending data to the endpoint and at no point will that endpoint disappear.\n\n## General Outline\n\nTypically, when using a notebook instance with SageMaker, you will proceed through the following steps. Of course, not every step will need to be done with each project. Also, there is quite a lot of room for variation in many of the steps, as you will see throughout these lessons.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nIn this notebook we will be skipping step 5, testing the model. In addition, we will perform steps 4, 6 and 7 multiple times with different models.",
"_____no_output_____"
],
[
"## Step 0: Setting up the notebook\n\nWe begin by setting up all of the necessary bits required to run our notebook. To start that means loading all of the Python modules we will need.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport os\n\nimport numpy as np\nimport pandas as pd\n\nfrom pprint import pprint\nimport matplotlib.pyplot as plt\nfrom time import gmtime, strftime\n\nfrom sklearn.datasets import load_boston\nimport sklearn.model_selection",
"_____no_output_____"
]
],
[
[
"In addition to the modules above, we need to import the various bits of SageMaker that we will be using. ",
"_____no_output_____"
]
],
[
[
"import sagemaker\nfrom sagemaker import get_execution_role\nfrom sagemaker.amazon.amazon_estimator import get_image_uri\nfrom sagemaker.predictor import csv_serializer\n\n# This is an object that represents the SageMaker session that we are currently operating in. This\n# object contains some useful information that we will need to access later such as our region.\nsession = sagemaker.Session()\n\n# This is an object that represents the IAM role that we are currently assigned. When we construct\n# and launch the training job later we will need to tell it what IAM role it should have. Since our\n# use case is relatively simple we will simply assign the training job the role we currently have.\nrole = get_execution_role()",
"_____no_output_____"
]
],
[
[
"## Step 1: Downloading the data\n\nFortunately, this dataset can be retrieved using sklearn and so this step is relatively straightforward.",
"_____no_output_____"
]
],
[
[
"boston = load_boston()",
"_____no_output_____"
]
],
[
[
"## Step 2: Preparing and splitting the data\n\nGiven that this is clean tabular data, we don't need to do any processing. However, we do need to split the rows in the dataset up into train, test and validation sets.",
"_____no_output_____"
]
],
[
[
"# First we package up the input data and the target variable (the median value) as pandas dataframes. This\n# will make saving the data to a file a little easier later on.\n\nX_bos_pd = pd.DataFrame(boston.data, columns=boston.feature_names)\nY_bos_pd = pd.DataFrame(boston.target)\n\n# We split the dataset into 2/3 training and 1/3 testing sets.\nX_train, X_test, Y_train, Y_test = sklearn.model_selection.train_test_split(X_bos_pd, Y_bos_pd, test_size=0.33)\n\n# Then we split the training set further into 2/3 training and 1/3 validation sets.\nX_train, X_val, Y_train, Y_val = sklearn.model_selection.train_test_split(X_train, Y_train, test_size=0.33)",
"_____no_output_____"
]
],
[
[
"## Step 3: Uploading the training and validation files to S3\n\nWhen a training job is constructed using SageMaker, a container is executed which performs the training operation. This container is given access to data that is stored in S3. This means that we need to upload the data we want to use for training to S3. We can use the SageMaker API to do this and hide some of the details.\n\n### Save the data locally\n\nFirst we need to create the train and validation csv files which we will then upload to S3.",
"_____no_output_____"
]
],
[
[
"# This is our local data directory. We need to make sure that it exists.\ndata_dir = '../data/boston'\nif not os.path.exists(data_dir):\n os.makedirs(data_dir)",
"_____no_output_____"
],
[
"# We use pandas to save our train and validation data to csv files. Note that we make sure not to include header\n# information or an index as this is required by the built in algorithms provided by Amazon. Also, it is assumed\n# that the first entry in each row is the target variable.\n\npd.concat([Y_val, X_val], axis=1).to_csv(os.path.join(data_dir, 'validation.csv'), header=False, index=False)\npd.concat([Y_train, X_train], axis=1).to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)",
"_____no_output_____"
]
],
[
[
"### Upload to S3\n\nSince we are currently running inside of a SageMaker session, we can use the object which represents this session to upload our data to the 'default' S3 bucket. Note that it is good practice to provide a custom prefix (essentially an S3 folder) to make sure that you don't accidentally interfere with data uploaded from some other notebook or project.",
"_____no_output_____"
]
],
[
[
"prefix = 'boston-update-endpoints'\n\nval_location = session.upload_data(os.path.join(data_dir, 'validation.csv'), key_prefix=prefix)\ntrain_location = session.upload_data(os.path.join(data_dir, 'train.csv'), key_prefix=prefix)",
"_____no_output_____"
]
],
[
[
"## Step 4 (A): Train the XGBoost model\n\nNow that we have the training and validation data uploaded to S3, we can construct our XGBoost model and train it. We will be making use of the high level SageMaker API to do this which will make the resulting code a little easier to read at the cost of some flexibility.\n\nTo construct an estimator, the object which we wish to train, we need to provide the location of a container which contains the training code. Since we are using a built in algorithm this container is provided by Amazon. However, the full name of the container is a bit lengthy and depends on the region that we are operating in. Fortunately, SageMaker provides a useful utility method called `get_image_uri` that constructs the image name for us.\n\nTo use the `get_image_uri` method we need to provide it with our current region, which can be obtained from the session object, and the name of the algorithm we wish to use. In this notebook we will be using XGBoost however you could try another algorithm if you wish. The list of built in algorithms can be found in the list of [Common Parameters](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html).",
"_____no_output_____"
]
],
[
[
"# As stated above, we use this utility method to construct the image name for the training container.\nxgb_container = get_image_uri(session.boto_region_name, 'xgboost')\n\n# Now that we know which container to use, we can construct the estimator object.\nxgb = sagemaker.estimator.Estimator(xgb_container, # The name of the training container\n role, # The IAM role to use (our current role in this case)\n train_instance_count=1, # The number of instances to use for training\n train_instance_type='ml.m4.xlarge', # The type of instance ot use for training\n output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),\n # Where to save the output (the model artifacts)\n sagemaker_session=session) # The current SageMaker session",
"'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\nThere is a more up to date SageMaker XGBoost image. To use the newer image, please set 'repo_version'='1.0-1'. For example:\n\tget_image_uri(region, 'xgboost', '1.0-1').\nParameter image_name will be renamed to image_uri in SageMaker Python SDK v2.\n"
]
],
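[
[
"The warning above tells us that `get_image_uri` will eventually be deprecated and that a newer XGBoost image exists. We stick with the default image in this notebook, but if you wanted the newer container the call would look like the sketch below, with the '1.0-1' version string taken directly from the warning. The `newer_xgb_container` variable is for illustration only and is not used elsewhere.",
"_____no_output_____"
]
],
[
[
"# Optional: construct the name of the newer XGBoost image, as suggested by the\n# warning above. We do not actually use this container in the rest of the notebook.\nnewer_xgb_container = get_image_uri(session.boto_region_name, 'xgboost', '1.0-1')\nprint(newer_xgb_container)",
"_____no_output_____"
]
],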
[
[
"Before asking SageMaker to begin the training job, we should probably set any model specific hyperparameters. There are quite a few that can be set when using the XGBoost algorithm, below are just a few of them. If you would like to change the hyperparameters below or modify additional ones you can find additional information on the [XGBoost hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html)",
"_____no_output_____"
]
],
[
[
"xgb.set_hyperparameters(max_depth=5,\n eta=0.2,\n gamma=4,\n min_child_weight=6,\n subsample=0.8,\n objective='reg:linear',\n early_stopping_rounds=10,\n num_round=200)",
"_____no_output_____"
]
],
[
[
"Now that we have our estimator object completely set up, it is time to train it. To do this we make sure that SageMaker knows our input data is in csv format and then execute the `fit` method.",
"_____no_output_____"
]
],
[
[
"# This is a wrapper around the location of our train and validation data, to make sure that SageMaker\n# knows our data is in csv format.\ns3_input_train = sagemaker.s3_input(s3_data=train_location, content_type='text/csv')\ns3_input_validation = sagemaker.s3_input(s3_data=val_location, content_type='text/csv')\n\nxgb.fit({'train': s3_input_train, 'validation': s3_input_validation})",
"'s3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.\n's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.\n"
]
],
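[
[
"If you would like to inspect the training job after the fact, one option (an addition to the notebook, assuming the `latest_training_job` attribute of the v1 SageMaker Python SDK) is to ask SageMaker to describe it. The response includes the job's status along with any final metrics that were recorded.",
"_____no_output_____"
]
],
[
[
"# Look up the training job that the call to fit just ran and print its status\n# together with the final metrics that SageMaker recorded for it.\ntraining_job_info = session.sagemaker_client.describe_training_job(\n    TrainingJobName=xgb.latest_training_job.name)\nprint(training_job_info['TrainingJobStatus'])\nprint(training_job_info.get('FinalMetricDataList', []))",
"_____no_output_____"
]
],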
[
[
"## Step 5: Test the trained model\n\nWe will be skipping this step for now.\n\n\n## Step 6 (A): Deploy the trained model\n\nEven though we used the high level approach to construct and train the XGBoost model, we will be using the lower level approach to deploy it. One of the reasons for this is so that we have additional control over how the endpoint is constructed. This will be a little more clear later on when we construct more advanced endpoints.\n\n### Build the model\n\nOf course, before we can deploy the model, we need to first create it. The `fit` method that we used earlier created some model artifacts and we can use these to construct a model object.",
"_____no_output_____"
]
],
[
[
"# Remember that a model needs to have a unique name\nxgb_model_name = \"boston-update-xgboost-model\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# We also need to tell SageMaker which container should be used for inference and where it should\n# retrieve the model artifacts from. In our case, the xgboost container that we used for training\n# can also be used for inference and the model artifacts come from the previous call to fit.\nxgb_primary_container = {\n \"Image\": xgb_container,\n \"ModelDataUrl\": xgb.model_data # model artifacts created earlier in xgb.fit(...)\n}\n\n# And lastly we construct the SageMaker model\nxgb_model_info = session.sagemaker_client.create_model(\n ModelName = xgb_model_name,\n ExecutionRoleArn = role,\n PrimaryContainer = xgb_primary_container)",
"_____no_output_____"
]
],
[
[
"### Create the endpoint configuration\n\nOnce we have a model we can start putting together the endpoint. Recall that to do this we need to first create an endpoint configuration, essentially the blueprint that SageMaker will use to build the endpoint itself.",
"_____no_output_____"
]
],
[
[
"# As before, we need to give our endpoint configuration a name which should be unique\nxgb_endpoint_config_name = \"boston-update-xgboost-endpoint-config-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# And then we ask SageMaker to construct the endpoint configuration\nxgb_endpoint_config_info = session.sagemaker_client.create_endpoint_config(\n EndpointConfigName = xgb_endpoint_config_name,\n ProductionVariants = [{\n \"InstanceType\": \"ml.m4.xlarge\",\n \"InitialVariantWeight\": 1,\n \"InitialInstanceCount\": 1,\n \"ModelName\": xgb_model_name,\n \"VariantName\": \"XGB-Model\"\n }])",
"_____no_output_____"
]
],
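[
[
"It is worth pausing on the `ProductionVariants` argument. An endpoint configuration may contain more than one variant, and incoming traffic is split between the variants in proportion to their weights. Purely as an illustration, a two-variant list would look like the sketch below; the 'Linear-Model-Name' string is hypothetical since we have not built a linear model yet, and this list is not used anywhere.",
"_____no_output_____"
]
],
[
[
"# Illustration only: an endpoint configuration can carry multiple production\n# variants, with traffic split according to the variant weights (50/50 here).\n# The model name 'Linear-Model-Name' is hypothetical and this list is unused.\nexample_production_variants = [{\n        \"InstanceType\": \"ml.m4.xlarge\",\n        \"InitialVariantWeight\": 1,\n        \"InitialInstanceCount\": 1,\n        \"ModelName\": xgb_model_name,\n        \"VariantName\": \"XGB-Model\"\n    }, {\n        \"InstanceType\": \"ml.m4.xlarge\",\n        \"InitialVariantWeight\": 1,\n        \"InitialInstanceCount\": 1,\n        \"ModelName\": \"Linear-Model-Name\",  # hypothetical\n        \"VariantName\": \"Linear-Model\"\n    }]",
"_____no_output_____"
]
],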
[
[
"### Deploy the endpoint\n\nNow that the endpoint configuration has been created, we can ask SageMaker to build our endpoint.\n\n**Note:** This is a friendly (repeated) reminder that you are about to deploy an endpoint. Make sure that you shut it down once you've finished with it!",
"_____no_output_____"
]
],
[
[
"# Again, we need a unique name for our endpoint\nendpoint_name = \"boston-update-endpoint-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# And then we can deploy our endpoint\nendpoint_info = session.sagemaker_client.create_endpoint(\n EndpointName = endpoint_name,\n EndpointConfigName = xgb_endpoint_config_name)",
"_____no_output_____"
],
[
"endpoint_dec = session.wait_for_endpoint(endpoint_name)",
"-------------!"
]
],
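[
[
"`wait_for_endpoint` blocks until SageMaker has finished building the endpoint. As a quick double check (again, an addition to the notebook) we can ask for the endpoint's current status, which should be 'InService' at this point.",
"_____no_output_____"
]
],
[
[
"# Check the current status of the endpoint; it should be 'InService' by now.\nendpoint_desc = session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)\nprint(endpoint_desc['EndpointStatus'])",
"_____no_output_____"
]
],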
[
[
"## Step 7 (A): Use the model\n\nNow that our model is trained and deployed we can send some test data to it and evaluate the results.",
"_____no_output_____"
]
],
[
[
"response = session.sagemaker_runtime_client.invoke_endpoint(\n EndpointName = endpoint_name,\n ContentType = 'text/csv',\n Body = ','.join(map(str, X_test.values[0])))",
"_____no_output_____"
],
[
"pprint(response)",
"{'Body': <botocore.response.StreamingBody object at 0x7ffa3adbc4a8>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 13:54:12 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': 'b8205bb3-f64e-4f6d-b05b-300cd9e72e41'},\n 'HTTPStatusCode': 200,\n 'RequestId': 'b8205bb3-f64e-4f6d-b05b-300cd9e72e41',\n 'RetryAttempts': 0}}\n"
],
[
"result = response['Body'].read().decode(\"utf-8\")",
"_____no_output_____"
],
[
"pprint(result)",
"'19.5732975006'\n"
],
[
"Y_test.values[0]",
"_____no_output_____"
]
],
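[
[
"The single prediction above looks reasonable, but it is more informative to look at the whole test set. The sketch below (an addition to the notebook) sends each test row to the endpoint in turn and computes the root mean squared error of the predictions. It is slow, since it makes one request per row, but it keeps the parsing trivial; numpy is imported here in case it has not been imported earlier.",
"_____no_output_____"
]
],
[
[
"# Evaluate the endpoint on the full test set, one row at a time, and report the RMSE.\nimport numpy as np\n\npredictions = []\nfor row in X_test.values:\n    row_response = session.sagemaker_runtime_client.invoke_endpoint(\n        EndpointName = endpoint_name,\n        ContentType = 'text/csv',\n        Body = ','.join(map(str, row)))\n    predictions.append(float(row_response['Body'].read().decode('utf-8')))\n\nrmse = np.sqrt(np.mean((np.array(predictions) - Y_test.values.flatten()) ** 2))\nprint('Test RMSE: {:.2f}'.format(rmse))",
"_____no_output_____"
]
],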
[
[
"## Shut down the endpoint\n\nNow that we know that the XGBoost endpoint works, we can shut it down. We will make use of it again later.",
"_____no_output_____"
]
],
[
[
"session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name)",
"_____no_output_____"
]
],
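[
[
"Deleting an endpoint is asynchronous, so it may linger in a 'Deleting' state for a short while. If you want to confirm that it is gone, a quick check like the one below (an addition to the notebook) lists any endpoints whose names contain ours; an empty list means the deletion has completed.",
"_____no_output_____"
]
],
[
[
"# Confirm that the endpoint has been (or is being) deleted. An empty list means the\n# deletion has completed; an entry in the 'Deleting' state means it is still underway.\nremaining = session.sagemaker_client.list_endpoints(NameContains=endpoint_name)['Endpoints']\nprint(remaining)",
"_____no_output_____"
]
],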
[
[
"## Step 4 (B): Train the Linear model\n\nSuppose we are working in an environment where the XGBoost model that we trained earlier is becoming too costly. Perhaps the number of calls to our endpoint has increased and the length of time it takes to perform inference with the XGBoost model is becoming problematic.\n\nA possible solution might be to train a simpler model to see if it performs nearly as well. In our case, we will construct a linear model. The process of doing this is the same as for creating the XGBoost model that we created earlier, although there are different hyperparameters that we need to set.",
"_____no_output_____"
]
],
[
[
"# Similar to the XGBoost model, we will use the utility method to construct the image name for the training container.\nlinear_container = get_image_uri(session.boto_region_name, 'linear-learner')\n\n# Now that we know which container to use, we can construct the estimator object.\nlinear = sagemaker.estimator.Estimator(linear_container, # The name of the training container\n role, # The IAM role to use (our current role in this case)\n train_instance_count=1, # The number of instances to use for training\n train_instance_type='ml.m4.xlarge', # The type of instance ot use for training\n output_path='s3://{}/{}/output'.format(session.default_bucket(), prefix),\n # Where to save the output (the model artifacts)\n sagemaker_session=session) # The current SageMaker session",
"'get_image_uri' method will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\nParameter image_name will be renamed to image_uri in SageMaker Python SDK v2.\n"
]
],
[
[
"Before asking SageMaker to train our model, we need to set some hyperparameters. In this case we will be using a linear model so the number of hyperparameters we need to set is much fewer. For more details see the [Linear model hyperparameter page](https://docs.aws.amazon.com/sagemaker/latest/dg/ll_hyperparameters.html)",
"_____no_output_____"
]
],
[
[
"linear.set_hyperparameters(feature_dim=13, # Our data has 13 feature columns\n predictor_type='regressor', # We wish to create a regression model\n mini_batch_size=200) # Here we set how many samples to look at in each iteration",
"_____no_output_____"
]
],
[
[
"Now that the hyperparameters have been set, we can ask SageMaker to fit the linear model to our data.",
"_____no_output_____"
]
],
[
[
"linear.fit({'train': s3_input_train, 'validation': s3_input_validation})",
"2020-08-05 14:00:32 Starting - Starting the training job...\n2020-08-05 14:00:34 Starting - Launching requested ML instances......\n2020-08-05 14:01:37 Starting - Preparing the instances for training......\n2020-08-05 14:02:40 Downloading - Downloading input data...\n2020-08-05 14:03:16 Training - Downloading the training image.\u001b[34mDocker entrypoint called with argument(s): train\u001b[0m\n\u001b[34mRunning default environment configuration script\u001b[0m\n\u001b[34m[08/05/2020 14:03:39 INFO 139620994029376] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-input.json: {u'loss_insensitivity': u'0.01', u'epochs': u'15', u'feature_dim': u'auto', u'init_bias': u'0.0', u'lr_scheduler_factor': u'auto', u'num_calibration_samples': u'10000000', u'accuracy_top_k': u'3', u'_num_kv_servers': u'auto', u'use_bias': u'true', u'num_point_for_scaler': u'10000', u'_log_level': u'info', u'quantile': u'0.5', u'bias_lr_mult': u'auto', u'lr_scheduler_step': u'auto', u'init_method': u'uniform', u'init_sigma': u'0.01', u'lr_scheduler_minimum_lr': u'auto', u'target_recall': u'0.8', u'num_models': u'auto', u'early_stopping_patience': u'3', u'momentum': u'auto', u'unbias_label': u'auto', u'wd': u'auto', u'optimizer': u'auto', u'_tuning_objective_metric': u'', u'early_stopping_tolerance': u'0.001', u'learning_rate': u'auto', u'_kvstore': u'auto', u'normalize_data': u'true', u'binary_classifier_model_selection_criteria': u'accuracy', u'use_lr_scheduler': u'true', u'target_precision': u'0.8', u'unbias_data': u'auto', u'init_scale': u'0.07', u'bias_wd_mult': u'auto', u'f_beta': u'1.0', u'mini_batch_size': u'1000', u'huber_delta': u'1.0', u'num_classes': u'1', u'beta_1': u'auto', u'loss': u'auto', u'beta_2': u'auto', u'_enable_profiler': u'false', u'normalize_label': u'auto', u'_num_gpus': u'auto', u'balance_multiclass_weights': u'false', u'positive_example_weight_mult': u'1.0', u'l1': u'auto', u'margin': u'1.0'}\u001b[0m\n\u001b[34m[08/05/2020 14:03:39 INFO 139620994029376] Merging with provided configuration from /opt/ml/input/config/hyperparameters.json: {u'feature_dim': u'13', u'mini_batch_size': u'200', u'predictor_type': u'regressor'}\u001b[0m\n\u001b[34m[08/05/2020 14:03:39 INFO 139620994029376] Final configuration: {u'loss_insensitivity': u'0.01', u'epochs': u'15', u'feature_dim': u'13', u'init_bias': u'0.0', u'lr_scheduler_factor': u'auto', u'num_calibration_samples': u'10000000', u'accuracy_top_k': u'3', u'_num_kv_servers': u'auto', u'use_bias': u'true', u'num_point_for_scaler': u'10000', u'_log_level': u'info', u'quantile': u'0.5', u'bias_lr_mult': u'auto', u'lr_scheduler_step': u'auto', u'init_method': u'uniform', u'init_sigma': u'0.01', u'lr_scheduler_minimum_lr': u'auto', u'target_recall': u'0.8', u'num_models': u'auto', u'early_stopping_patience': u'3', u'momentum': u'auto', u'unbias_label': u'auto', u'wd': u'auto', u'optimizer': u'auto', u'_tuning_objective_metric': u'', u'early_stopping_tolerance': u'0.001', u'learning_rate': u'auto', u'_kvstore': u'auto', u'normalize_data': u'true', u'binary_classifier_model_selection_criteria': u'accuracy', u'use_lr_scheduler': u'true', u'target_precision': u'0.8', u'unbias_data': u'auto', u'init_scale': u'0.07', u'bias_wd_mult': u'auto', u'f_beta': u'1.0', u'mini_batch_size': u'200', u'huber_delta': u'1.0', u'num_classes': u'1', u'predictor_type': u'regressor', u'beta_1': u'auto', u'loss': u'auto', u'beta_2': u'auto', u'_enable_profiler': u'false', u'normalize_label': u'auto', u'_num_gpus': u'auto', 
u'balance_multiclass_weights': u'false', u'positive_example_weight_mult': u'1.0', u'l1': u'auto', u'margin': u'1.0'}\u001b[0m\n\u001b[34m[08/05/2020 14:03:39 WARNING 139620994029376] Loggers have already been setup.\u001b[0m\n\u001b[34mProcess 1 is a worker.\u001b[0m\n\u001b[34m[08/05/2020 14:03:39 INFO 139620994029376] Using default worker.\u001b[0m\n\u001b[34m[08/05/2020 14:03:39 INFO 139620994029376] Checkpoint loading and saving are disabled.\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Create Store: local\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Scaler algorithm parameters\n <algorithm.scaler.ScalerAlgorithmStable object at 0x7efbad248d10>\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Scaling model computed with parameters:\n {'stdev_weight': \u001b[0m\n\u001b[34m[8.4458857e+00 2.1324451e+01 6.7391076e+00 2.8618175e-01 1.1554527e-01\n 7.8246862e-01 2.8094610e+01 2.0537193e+00 8.4978762e+00 1.6578876e+02\n 2.1248786e+00 9.3964310e+01 7.8991852e+00]\u001b[0m\n\u001b[34m<NDArray 13 @cpu(0)>, 'stdev_label': \u001b[0m\n\u001b[34m[9.859176]\u001b[0m\n\u001b[34m<NDArray 1 @cpu(0)>, 'mean_label': \u001b[0m\n\u001b[34m[22.5775]\u001b[0m\n\u001b[34m<NDArray 1 @cpu(0)>, 'mean_weight': \u001b[0m\n\u001b[34m[3.7224643e+00 1.0107500e+01 1.0820400e+01 9.0000004e-02 5.5104399e-01\n 6.3082299e+00 6.7886002e+01 3.8490415e+00 9.3100004e+00 4.0271500e+02\n 1.8406500e+01 3.5667120e+02 1.3100551e+01]\u001b[0m\n\u001b[34m<NDArray 13 @cpu(0)>}\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] nvidia-smi took: 0.0252351760864 secs to identify 0 gpus\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Number of GPUs being used: 0\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 0, \"sum\": 0.0, \"min\": 0}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 0, \"sum\": 0.0, \"min\": 0}, \"Total Batches Seen\": {\"count\": 1, \"max\": 3, \"sum\": 3.0, \"min\": 3}, \"Total Records Seen\": {\"count\": 1, \"max\": 427, \"sum\": 427.0, \"min\": 427}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}}, \"EndTime\": 1596636220.131612, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"init_train_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\"}, \"StartTime\": 1596636220.131566}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9611653900146484, \"sum\": 0.9611653900146484, \"min\": 0.9611653900146484}}, \"EndTime\": 1596636220.169403, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169326}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.872203598022461, \"sum\": 0.872203598022461, \"min\": 0.872203598022461}}, \"EndTime\": 1596636220.169496, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169482}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.3584893798828126, \"sum\": 1.3584893798828126, \"min\": 1.3584893798828126}}, \"EndTime\": 1596636220.169555, \"Dimensions\": 
{\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169539}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8590465545654297, \"sum\": 0.8590465545654297, \"min\": 0.8590465545654297}}, \"EndTime\": 1596636220.169619, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169606}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8566489410400391, \"sum\": 0.8566489410400391, \"min\": 0.8566489410400391}}, \"EndTime\": 1596636220.169708, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.16969}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0241845703125, \"sum\": 1.0241845703125, \"min\": 1.0241845703125}}, \"EndTime\": 1596636220.169769, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169753}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1434259796142578, \"sum\": 1.1434259796142578, \"min\": 1.1434259796142578}}, \"EndTime\": 1596636220.169825, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.16981}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.124141845703125, \"sum\": 1.124141845703125, \"min\": 1.124141845703125}}, \"EndTime\": 1596636220.169892, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169875}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9338525390625, \"sum\": 0.9338525390625, \"min\": 0.9338525390625}}, \"EndTime\": 1596636220.169961, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.169942}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.4533041381835938, \"sum\": 1.4533041381835938, \"min\": 1.4533041381835938}}, \"EndTime\": 1596636220.170023, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170005}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.862180404663086, \"sum\": 0.862180404663086, \"min\": 0.862180404663086}}, \"EndTime\": 1596636220.170086, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170069}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.137659683227539, \"sum\": 1.137659683227539, \"min\": 1.137659683227539}}, \"EndTime\": 1596636220.170151, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170133}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": 
{\"count\": 1, \"max\": 1.0150443267822267, \"sum\": 1.0150443267822267, \"min\": 1.0150443267822267}}, \"EndTime\": 1596636220.170218, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170201}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0227135467529296, \"sum\": 1.0227135467529296, \"min\": 1.0227135467529296}}, \"EndTime\": 1596636220.170286, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170269}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.874876480102539, \"sum\": 0.874876480102539, \"min\": 0.874876480102539}}, \"EndTime\": 1596636220.170347, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.17033}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0273975372314452, \"sum\": 1.0273975372314452, \"min\": 1.0273975372314452}}, \"EndTime\": 1596636220.170413, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170396}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0496857452392578, \"sum\": 1.0496857452392578, \"min\": 1.0496857452392578}}, \"EndTime\": 1596636220.170481, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170464}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8572389221191407, \"sum\": 0.8572389221191407, \"min\": 0.8572389221191407}}, \"EndTime\": 1596636220.170549, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170532}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2292351531982422, \"sum\": 1.2292351531982422, \"min\": 1.2292351531982422}}, \"EndTime\": 1596636220.170617, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.1706}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0399484252929687, \"sum\": 1.0399484252929687, \"min\": 1.0399484252929687}}, \"EndTime\": 1596636220.170686, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170669}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7589112091064453, \"sum\": 0.7589112091064453, \"min\": 0.7589112091064453}}, \"EndTime\": 1596636220.170747, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.17073}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1678211975097657, \"sum\": 1.1678211975097657, \"min\": 1.1678211975097657}}, \"EndTime\": 1596636220.170805, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170789}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7602946472167968, \"sum\": 0.7602946472167968, \"min\": 0.7602946472167968}}, \"EndTime\": 1596636220.170862, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170846}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8650836944580078, \"sum\": 0.8650836944580078, \"min\": 0.8650836944580078}}, \"EndTime\": 1596636220.170919, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170904}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9523546600341797, \"sum\": 0.9523546600341797, \"min\": 0.9523546600341797}}, \"EndTime\": 1596636220.170981, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.170966}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9380184173583984, \"sum\": 0.9380184173583984, \"min\": 0.9380184173583984}}, \"EndTime\": 1596636220.171034, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.17102}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2352330017089843, \"sum\": 1.2352330017089843, \"min\": 1.2352330017089843}}, \"EndTime\": 1596636220.171089, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.171073}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1495844268798827, \"sum\": 1.1495844268798827, \"min\": 1.1495844268798827}}, \"EndTime\": 1596636220.171144, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.171129}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0251093292236328, \"sum\": 1.0251093292236328, \"min\": 1.0251093292236328}}, \"EndTime\": 1596636220.171206, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.171189}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9321627807617188, \"sum\": 0.9321627807617188, \"min\": 0.9321627807617188}}, \"EndTime\": 1596636220.171278, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.17126}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1153968811035155, \"sum\": 1.1153968811035155, \"min\": 1.1153968811035155}}, \"EndTime\": 1596636220.171348, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.17133}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": 
{\"count\": 1, \"max\": 1.1502596282958983, \"sum\": 1.1502596282958983, \"min\": 1.1502596282958983}}, \"EndTime\": 1596636220.17142, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.171402}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=0, train mse_objective <loss>=0.961165390015\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.24750627790179, \"sum\": 78.24750627790179, \"min\": 78.24750627790179}}, \"EndTime\": 1596636220.223515, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223416}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.47714669363839, \"sum\": 68.47714669363839, \"min\": 68.47714669363839}}, \"EndTime\": 1596636220.223621, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.2236}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 102.79734584263393, \"sum\": 102.79734584263393, \"min\": 102.79734584263393}}, \"EndTime\": 1596636220.223686, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223667}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 70.99328177315849, \"sum\": 70.99328177315849, \"min\": 70.99328177315849}}, \"EndTime\": 1596636220.223754, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223736}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 71.25920758928571, \"sum\": 71.25920758928571, \"min\": 71.25920758928571}}, \"EndTime\": 1596636220.22382, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223803}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.067108154296875, \"sum\": 47.067108154296875, \"min\": 47.067108154296875}}, \"EndTime\": 1596636220.223873, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223859}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.08453369140625, \"sum\": 59.08453369140625, \"min\": 59.08453369140625}}, \"EndTime\": 1596636220.223928, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223913}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.83188738141741, \"sum\": 58.83188738141741, \"min\": 58.83188738141741}}, \"EndTime\": 1596636220.223983, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.223968}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 
69.55882045200893, \"sum\": 69.55882045200893, \"min\": 69.55882045200893}}, \"EndTime\": 1596636220.224043, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224026}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 105.24265834263393, \"sum\": 105.24265834263393, \"min\": 105.24265834263393}}, \"EndTime\": 1596636220.224113, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224094}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.68609619140625, \"sum\": 68.68609619140625, \"min\": 68.68609619140625}}, \"EndTime\": 1596636220.224185, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224165}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 83.57977294921875, \"sum\": 83.57977294921875, \"min\": 83.57977294921875}}, \"EndTime\": 1596636220.224256, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224237}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.783905029296875, \"sum\": 62.783905029296875, \"min\": 62.783905029296875}}, \"EndTime\": 1596636220.224327, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224308}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 66.25071062360492, \"sum\": 66.25071062360492, \"min\": 66.25071062360492}}, \"EndTime\": 1596636220.224397, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224379}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 73.99017333984375, \"sum\": 73.99017333984375, \"min\": 73.99017333984375}}, \"EndTime\": 1596636220.224467, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224448}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.540928431919646, \"sum\": 52.540928431919646, \"min\": 52.540928431919646}}, \"EndTime\": 1596636220.224536, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224517}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.27535574776786, \"sum\": 80.27535574776786, \"min\": 80.27535574776786}}, \"EndTime\": 1596636220.224607, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224588}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 65.17694527762276, \"sum\": 65.17694527762276, \"min\": 65.17694527762276}}, \"EndTime\": 1596636220.224667, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224651}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 96.98881312779018, \"sum\": 96.98881312779018, \"min\": 96.98881312779018}}, \"EndTime\": 1596636220.224725, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.22471}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 83.52301025390625, \"sum\": 83.52301025390625, \"min\": 83.52301025390625}}, \"EndTime\": 1596636220.224778, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224764}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.66139439174107, \"sum\": 77.66139439174107, \"min\": 77.66139439174107}}, \"EndTime\": 1596636220.224833, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224817}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.056222098214285, \"sum\": 53.056222098214285, \"min\": 53.056222098214285}}, \"EndTime\": 1596636220.224892, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224875}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.255689348493306, \"sum\": 62.255689348493306, \"min\": 62.255689348493306}}, \"EndTime\": 1596636220.224963, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.224943}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.52202933175223, \"sum\": 60.52202933175223, \"min\": 60.52202933175223}}, \"EndTime\": 1596636220.225037, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225017}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.67854527064732, \"sum\": 81.67854527064732, \"min\": 81.67854527064732}}, \"EndTime\": 1596636220.225103, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225084}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.95903669084821, \"sum\": 79.95903669084821, \"min\": 79.95903669084821}}, \"EndTime\": 1596636220.225176, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225157}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 103.10022844587054, \"sum\": 103.10022844587054, \"min\": 103.10022844587054}}, \"EndTime\": 1596636220.225246, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225227}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"validation_mse_objective\": {\"count\": 1, \"max\": 90.3426513671875, \"sum\": 90.3426513671875, \"min\": 90.3426513671875}}, \"EndTime\": 1596636220.225317, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225298}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 89.70643833705357, \"sum\": 89.70643833705357, \"min\": 89.70643833705357}}, \"EndTime\": 1596636220.225386, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225368}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 103.34828404017857, \"sum\": 103.34828404017857, \"min\": 103.34828404017857}}, \"EndTime\": 1596636220.225447, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.22543}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 69.40556117466518, \"sum\": 69.40556117466518, \"min\": 69.40556117466518}}, \"EndTime\": 1596636220.225516, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225499}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 75.64864676339286, \"sum\": 75.64864676339286, \"min\": 75.64864676339286}}, \"EndTime\": 1596636220.225585, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.225566}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=0, validation mse_objective <loss>=78.2475062779\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=0, criteria=mse_objective, value=47.0671081543\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Epoch 0: Loss improved. 
Updating best model\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 0\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmp2tGD5h/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 6 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 5, \"sum\": 5.0, \"min\": 5}, \"Total Records Seen\": {\"count\": 1, \"max\": 654, \"sum\": 654.0, \"min\": 654}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 3, \"sum\": 3.0, \"min\": 3}}, \"EndTime\": 1596636220.236259, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 0}, \"StartTime\": 1596636220.131879}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2172.07916247 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9048382568359375, \"sum\": 0.9048382568359375, \"min\": 0.9048382568359375}}, \"EndTime\": 1596636220.271307, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.2712}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8228231811523438, \"sum\": 0.8228231811523438, \"min\": 0.8228231811523438}}, \"EndTime\": 1596636220.271417, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271396}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2816416931152343, \"sum\": 1.2816416931152343, \"min\": 1.2816416931152343}}, \"EndTime\": 1596636220.271483, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271467}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8100093078613281, \"sum\": 0.8100093078613281, \"min\": 0.8100093078613281}}, \"EndTime\": 1596636220.271544, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271528}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6791785430908203, \"sum\": 0.6791785430908203, \"min\": 0.6791785430908203}}, \"EndTime\": 1596636220.271602, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271588}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4653607177734375, \"sum\": 0.4653607177734375, \"min\": 0.4653607177734375}}, \"EndTime\": 1596636220.271663, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", 
\"epoch\": 1}, \"StartTime\": 1596636220.271647}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6059736633300781, \"sum\": 0.6059736633300781, \"min\": 0.6059736633300781}}, \"EndTime\": 1596636220.271718, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271704}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5858756637573242, \"sum\": 0.5858756637573242, \"min\": 0.5858756637573242}}, \"EndTime\": 1596636220.271774, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271759}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8847760772705078, \"sum\": 0.8847760772705078, \"min\": 0.8847760772705078}}, \"EndTime\": 1596636220.271837, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271822}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.3727023315429687, \"sum\": 1.3727023315429687, \"min\": 1.3727023315429687}}, \"EndTime\": 1596636220.271896, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271882}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8169847106933594, \"sum\": 0.8169847106933594, \"min\": 0.8169847106933594}}, \"EndTime\": 1596636220.271951, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271937}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0750876617431642, \"sum\": 1.0750876617431642, \"min\": 1.0750876617431642}}, \"EndTime\": 1596636220.272002, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.271989}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6131308364868164, \"sum\": 0.6131308364868164, \"min\": 0.6131308364868164}}, \"EndTime\": 1596636220.272054, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272041}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6543889617919922, \"sum\": 0.6543889617919922, \"min\": 0.6543889617919922}}, \"EndTime\": 1596636220.272106, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272093}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7166834259033203, \"sum\": 0.7166834259033203, \"min\": 0.7166834259033203}}, \"EndTime\": 1596636220.272159, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272145}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5036476898193359, \"sum\": 0.5036476898193359, 
\"min\": 0.5036476898193359}}, \"EndTime\": 1596636220.272212, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272198}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.989932861328125, \"sum\": 0.989932861328125, \"min\": 0.989932861328125}}, \"EndTime\": 1596636220.272263, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.27225}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.813194808959961, \"sum\": 0.813194808959961, \"min\": 0.813194808959961}}, \"EndTime\": 1596636220.272316, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272302}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1644004058837891, \"sum\": 1.1644004058837891, \"min\": 1.1644004058837891}}, \"EndTime\": 1596636220.272368, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272354}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9825169372558594, \"sum\": 0.9825169372558594, \"min\": 0.9825169372558594}}, \"EndTime\": 1596636220.272428, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272414}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.726763916015625, \"sum\": 0.726763916015625, \"min\": 0.726763916015625}}, \"EndTime\": 1596636220.27249, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272475}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5521905517578125, \"sum\": 0.5521905517578125, \"min\": 0.5521905517578125}}, \"EndTime\": 1596636220.272545, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272531}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5859183883666992, \"sum\": 0.5859183883666992, \"min\": 0.5859183883666992}}, \"EndTime\": 1596636220.272598, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272585}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6083607482910156, \"sum\": 0.6083607482910156, \"min\": 0.6083607482910156}}, \"EndTime\": 1596636220.272653, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272639}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9582553100585938, \"sum\": 0.9582553100585938, \"min\": 0.9582553100585938}}, \"EndTime\": 1596636220.272708, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, 
\"StartTime\": 1596636220.272694}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.950558853149414, \"sum\": 0.950558853149414, \"min\": 0.950558853149414}}, \"EndTime\": 1596636220.272763, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.27275}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1906998443603516, \"sum\": 1.1906998443603516, \"min\": 1.1906998443603516}}, \"EndTime\": 1596636220.272815, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272802}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1228743743896485, \"sum\": 1.1228743743896485, \"min\": 1.1228743743896485}}, \"EndTime\": 1596636220.27287, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272856}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0259148406982421, \"sum\": 1.0259148406982421, \"min\": 1.0259148406982421}}, \"EndTime\": 1596636220.272921, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272908}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2541625213623047, \"sum\": 1.2541625213623047, \"min\": 1.2541625213623047}}, \"EndTime\": 1596636220.272972, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.272959}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.893201904296875, \"sum\": 0.893201904296875, \"min\": 0.893201904296875}}, \"EndTime\": 1596636220.273022, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.273009}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8814334106445313, \"sum\": 0.8814334106445313, \"min\": 0.8814334106445313}}, \"EndTime\": 1596636220.273075, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.273061}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=1, train mse_objective <loss>=0.904838256836\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 74.74734933035714, \"sum\": 74.74734933035714, \"min\": 74.74734933035714}}, \"EndTime\": 1596636220.327008, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.326948}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 65.76561192103794, \"sum\": 65.76561192103794, \"min\": 65.76561192103794}}, \"EndTime\": 1596636220.327083, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 
1596636220.32707}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 97.84104701450893, \"sum\": 97.84104701450893, \"min\": 97.84104701450893}}, \"EndTime\": 1596636220.327145, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327128}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.07901872907367, \"sum\": 68.07901872907367, \"min\": 68.07901872907367}}, \"EndTime\": 1596636220.327213, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327196}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 102.77403913225446, \"sum\": 102.77403913225446, \"min\": 102.77403913225446}}, \"EndTime\": 1596636220.327277, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.32726}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 110.51983642578125, \"sum\": 110.51983642578125, \"min\": 110.51983642578125}}, \"EndTime\": 1596636220.327339, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327322}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 116.91694859095982, \"sum\": 116.91694859095982, \"min\": 116.91694859095982}}, \"EndTime\": 1596636220.327411, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327393}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 117.60636683872768, \"sum\": 117.60636683872768, \"min\": 117.60636683872768}}, \"EndTime\": 1596636220.327477, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327459}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 67.10769653320312, \"sum\": 67.10769653320312, \"min\": 67.10769653320312}}, \"EndTime\": 1596636220.327538, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327522}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 99.94106619698661, \"sum\": 99.94106619698661, \"min\": 99.94106619698661}}, \"EndTime\": 1596636220.3276, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327583}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 66.27735246930804, \"sum\": 66.27735246930804, \"min\": 66.27735246930804}}, \"EndTime\": 1596636220.327665, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327647}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.09776960100446, \"sum\": 80.09776960100446, 
\"min\": 80.09776960100446}}, \"EndTime\": 1596636220.327726, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327709}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 115.42562430245536, \"sum\": 115.42562430245536, \"min\": 115.42562430245536}}, \"EndTime\": 1596636220.32779, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327773}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 110.42515345982143, \"sum\": 110.42515345982143, \"min\": 110.42515345982143}}, \"EndTime\": 1596636220.327853, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327836}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 101.08272879464286, \"sum\": 101.08272879464286, \"min\": 101.08272879464286}}, \"EndTime\": 1596636220.327913, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327897}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 116.11233956473214, \"sum\": 116.11233956473214, \"min\": 116.11233956473214}}, \"EndTime\": 1596636220.327972, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.327955}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 76.51331438337054, \"sum\": 76.51331438337054, \"min\": 76.51331438337054}}, \"EndTime\": 1596636220.328041, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328023}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 63.298187255859375, \"sum\": 63.298187255859375, \"min\": 63.298187255859375}}, \"EndTime\": 1596636220.328104, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328088}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 93.03311593191964, \"sum\": 93.03311593191964, \"min\": 93.03311593191964}}, \"EndTime\": 1596636220.328166, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.32815}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.92426409040179, \"sum\": 79.92426409040179, \"min\": 79.92426409040179}}, \"EndTime\": 1596636220.328225, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.32821}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 92.15549142020089, \"sum\": 92.15549142020089, \"min\": 92.15549142020089}}, \"EndTime\": 1596636220.328286, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328268}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 103.01003592354911, \"sum\": 103.01003592354911, \"min\": 103.01003592354911}}, \"EndTime\": 1596636220.328336, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328321}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 91.16812569754464, \"sum\": 91.16812569754464, \"min\": 91.16812569754464}}, \"EndTime\": 1596636220.328395, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328379}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.68690708705357, \"sum\": 84.68690708705357, \"min\": 84.68690708705357}}, \"EndTime\": 1596636220.328461, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328445}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.42247663225446, \"sum\": 81.42247663225446, \"min\": 81.42247663225446}}, \"EndTime\": 1596636220.328528, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328511}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.65519496372768, \"sum\": 80.65519496372768, \"min\": 80.65519496372768}}, \"EndTime\": 1596636220.328592, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328575}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 99.43959263392857, \"sum\": 99.43959263392857, \"min\": 99.43959263392857}}, \"EndTime\": 1596636220.328657, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.32864}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 89.51160539899554, \"sum\": 89.51160539899554, \"min\": 89.51160539899554}}, \"EndTime\": 1596636220.328716, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.3287}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 109.67857142857143, \"sum\": 109.67857142857143, \"min\": 109.67857142857143}}, \"EndTime\": 1596636220.328764, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328755}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 113.45515659877232, \"sum\": 113.45515659877232, \"min\": 113.45515659877232}}, \"EndTime\": 1596636220.328811, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328796}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": 
{\"count\": 1, \"max\": 95.67272077287946, \"sum\": 95.67272077287946, \"min\": 95.67272077287946}}, \"EndTime\": 1596636220.328875, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328859}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 103.35009765625, \"sum\": 103.35009765625, \"min\": 103.35009765625}}, \"EndTime\": 1596636220.328936, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.328919}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=1, validation mse_objective <loss>=74.7473493304\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=1, criteria=mse_objective, value=63.2981872559\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 1\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpIR3fHi/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 13 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 7, \"sum\": 7.0, \"min\": 7}, \"Total Records Seen\": {\"count\": 1, \"max\": 881, \"sum\": 881.0, \"min\": 881}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 4, \"sum\": 4.0, \"min\": 4}}, \"EndTime\": 1596636220.334606, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 1}, \"StartTime\": 1596636220.236544}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2312.03362757 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8530387115478516, \"sum\": 0.8530387115478516, \"min\": 0.8530387115478516}}, \"EndTime\": 1596636220.363289, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363192}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7781190490722656, \"sum\": 0.7781190490722656, \"min\": 0.7781190490722656}}, \"EndTime\": 1596636220.363389, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363369}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2091915130615234, \"sum\": 1.2091915130615234, \"min\": 1.2091915130615234}}, \"EndTime\": 1596636220.363458, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.36344}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 
0.7654522705078125, \"sum\": 0.7654522705078125, \"min\": 0.7654522705078125}}, \"EndTime\": 1596636220.363531, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363514}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9297465515136719, \"sum\": 0.9297465515136719, \"min\": 0.9297465515136719}}, \"EndTime\": 1596636220.363595, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363577}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.001320343017578, \"sum\": 1.001320343017578, \"min\": 1.001320343017578}}, \"EndTime\": 1596636220.363659, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363642}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0718758392333985, \"sum\": 1.0718758392333985, \"min\": 1.0718758392333985}}, \"EndTime\": 1596636220.363724, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363707}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0334678649902345, \"sum\": 1.0334678649902345, \"min\": 1.0334678649902345}}, \"EndTime\": 1596636220.363784, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363769}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8400011444091797, \"sum\": 0.8400011444091797, \"min\": 0.8400011444091797}}, \"EndTime\": 1596636220.363844, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363827}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2964109802246093, \"sum\": 1.2964109802246093, \"min\": 1.2964109802246093}}, \"EndTime\": 1596636220.363908, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.36389}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7762201690673828, \"sum\": 0.7762201690673828, \"min\": 0.7762201690673828}}, \"EndTime\": 1596636220.363974, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.363957}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0170508575439454, \"sum\": 1.0170508575439454, \"min\": 1.0170508575439454}}, \"EndTime\": 1596636220.364037, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364021}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0322129821777344, \"sum\": 1.0322129821777344, \"min\": 1.0322129821777344}}, \"EndTime\": 1596636220.364097, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364082}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0212460327148438, \"sum\": 1.0212460327148438, \"min\": 1.0212460327148438}}, \"EndTime\": 1596636220.364155, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.36414}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9029623413085938, \"sum\": 0.9029623413085938, \"min\": 0.9029623413085938}}, \"EndTime\": 1596636220.364214, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364197}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0356041717529296, \"sum\": 1.0356041717529296, \"min\": 1.0356041717529296}}, \"EndTime\": 1596636220.364278, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364261}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9346100616455079, \"sum\": 0.9346100616455079, \"min\": 0.9346100616455079}}, \"EndTime\": 1596636220.364342, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364326}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7738321685791015, \"sum\": 0.7738321685791015, \"min\": 0.7738321685791015}}, \"EndTime\": 1596636220.364403, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364387}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1040125274658203, \"sum\": 1.1040125274658203, \"min\": 1.1040125274658203}}, \"EndTime\": 1596636220.364466, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364449}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9296357727050781, \"sum\": 0.9296357727050781, \"min\": 0.9296357727050781}}, \"EndTime\": 1596636220.364528, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364511}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8108164215087891, \"sum\": 0.8108164215087891, \"min\": 0.8108164215087891}}, \"EndTime\": 1596636220.36459, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364572}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.94494384765625, \"sum\": 0.94494384765625, \"min\": 0.94494384765625}}, \"EndTime\": 1596636220.364652, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364636}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 
0.8260028839111329, \"sum\": 0.8260028839111329, \"min\": 0.8260028839111329}}, \"EndTime\": 1596636220.364717, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364699}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.778726806640625, \"sum\": 0.778726806640625, \"min\": 0.778726806640625}}, \"EndTime\": 1596636220.36478, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364762}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9538595581054687, \"sum\": 0.9538595581054687, \"min\": 0.9538595581054687}}, \"EndTime\": 1596636220.364841, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364824}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.96058349609375, \"sum\": 0.96058349609375, \"min\": 0.96058349609375}}, \"EndTime\": 1596636220.364906, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.364888}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1468984985351562, \"sum\": 1.1468984985351562, \"min\": 1.1468984985351562}}, \"EndTime\": 1596636220.364967, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.36495}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1108269500732422, \"sum\": 1.1108269500732422, \"min\": 1.1108269500732422}}, \"EndTime\": 1596636220.36503, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.365012}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1948683166503906, \"sum\": 1.1948683166503906, \"min\": 1.1948683166503906}}, \"EndTime\": 1596636220.365092, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.365074}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2872807312011718, \"sum\": 1.2872807312011718, \"min\": 1.2872807312011718}}, \"EndTime\": 1596636220.365154, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.365136}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1121293640136718, \"sum\": 1.1121293640136718, \"min\": 1.1121293640136718}}, \"EndTime\": 1596636220.365215, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.365197}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1237210845947265, \"sum\": 1.1237210845947265, \"min\": 1.1237210845947265}}, \"EndTime\": 1596636220.365278, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.365261}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=2, train mse_objective <loss>=0.853038711548\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 70.82250104631696, \"sum\": 70.82250104631696, \"min\": 70.82250104631696}}, \"EndTime\": 1596636220.423583, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.423491}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.706298828125, \"sum\": 62.706298828125, \"min\": 62.706298828125}}, \"EndTime\": 1596636220.423684, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.423664}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 92.3427734375, \"sum\": 92.3427734375, \"min\": 92.3427734375}}, \"EndTime\": 1596636220.423752, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.423734}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 64.83153424944196, \"sum\": 64.83153424944196, \"min\": 64.83153424944196}}, \"EndTime\": 1596636220.423824, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.423805}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.520948137555806, \"sum\": 42.520948137555806, \"min\": 42.520948137555806}}, \"EndTime\": 1596636220.423893, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.423875}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 65.05842808314732, \"sum\": 65.05842808314732, \"min\": 65.05842808314732}}, \"EndTime\": 1596636220.423957, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42394}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 67.38193184988839, \"sum\": 67.38193184988839, \"min\": 67.38193184988839}}, \"EndTime\": 1596636220.424019, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424002}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 66.95986066545758, \"sum\": 66.95986066545758, \"min\": 66.95986066545758}}, \"EndTime\": 1596636220.424077, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424061}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 64.24301147460938, \"sum\": 64.24301147460938, \"min\": 64.24301147460938}}, \"EndTime\": 1596636220.424131, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, 
\"StartTime\": 1596636220.424117}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 94.27920096261161, \"sum\": 94.27920096261161, \"min\": 94.27920096261161}}, \"EndTime\": 1596636220.424188, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424173}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 63.468819754464285, \"sum\": 63.468819754464285, \"min\": 63.468819754464285}}, \"EndTime\": 1596636220.424245, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42423}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 76.07427106584821, \"sum\": 76.07427106584821, \"min\": 76.07427106584821}}, \"EndTime\": 1596636220.424307, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424291}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.01048060825893, \"sum\": 57.01048060825893, \"min\": 57.01048060825893}}, \"EndTime\": 1596636220.424368, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42435}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.62969534737723, \"sum\": 54.62969534737723, \"min\": 54.62969534737723}}, \"EndTime\": 1596636220.424439, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42442}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.820521763392854, \"sum\": 42.820521763392854, \"min\": 42.820521763392854}}, \"EndTime\": 1596636220.424499, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424483}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 64.00607735770089, \"sum\": 64.00607735770089, \"min\": 64.00607735770089}}, \"EndTime\": 1596636220.424568, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424549}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 72.43051147460938, \"sum\": 72.43051147460938, \"min\": 72.43051147460938}}, \"EndTime\": 1596636220.424636, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424618}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.678680419921875, \"sum\": 60.678680419921875, \"min\": 60.678680419921875}}, \"EndTime\": 1596636220.424703, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424686}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 88.55329241071429, \"sum\": 
88.55329241071429, \"min\": 88.55329241071429}}, \"EndTime\": 1596636220.424765, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424747}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 76.05338832310268, \"sum\": 76.05338832310268, \"min\": 76.05338832310268}}, \"EndTime\": 1596636220.424825, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424808}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.688607352120535, \"sum\": 37.688607352120535, \"min\": 37.688607352120535}}, \"EndTime\": 1596636220.424882, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424866}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.48581804547991, \"sum\": 54.48581804547991, \"min\": 54.48581804547991}}, \"EndTime\": 1596636220.424949, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424932}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 34.71415056501116, \"sum\": 34.71415056501116, \"min\": 34.71415056501116}}, \"EndTime\": 1596636220.425015, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.424998}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.15212140764509, \"sum\": 37.15212140764509, \"min\": 37.15212140764509}}, \"EndTime\": 1596636220.425082, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.425064}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.61318533761161, \"sum\": 80.61318533761161, \"min\": 80.61318533761161}}, \"EndTime\": 1596636220.425147, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42513}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.49530901227679, \"sum\": 80.49530901227679, \"min\": 80.49530901227679}}, \"EndTime\": 1596636220.425206, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42519}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 95.18546840122768, \"sum\": 95.18546840122768, \"min\": 95.18546840122768}}, \"EndTime\": 1596636220.425267, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42525}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 88.00847516741071, \"sum\": 88.00847516741071, \"min\": 88.00847516741071}}, \"EndTime\": 1596636220.425326, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.425311}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.37960379464286, \"sum\": 77.37960379464286, \"min\": 77.37960379464286}}, \"EndTime\": 1596636220.425394, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.425377}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 73.36503383091518, \"sum\": 73.36503383091518, \"min\": 73.36503383091518}}, \"EndTime\": 1596636220.42546, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.425443}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.40117536272321, \"sum\": 77.40117536272321, \"min\": 77.40117536272321}}, \"EndTime\": 1596636220.425527, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.425509}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.95949881417411, \"sum\": 80.95949881417411, \"min\": 80.95949881417411}}, \"EndTime\": 1596636220.425587, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.42557}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=2, validation mse_objective <loss>=70.8225010463\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=2, criteria=mse_objective, value=34.714150565\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Epoch 2: Loss improved. 
Updating best model\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 2\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpoiy_UP/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 20 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 9, \"sum\": 9.0, \"min\": 9}, \"Total Records Seen\": {\"count\": 1, \"max\": 1108, \"sum\": 1108.0, \"min\": 1108}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 5, \"sum\": 5.0, \"min\": 5}}, \"EndTime\": 1596636220.435575, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 2}, \"StartTime\": 1596636220.334857}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2250.84399054 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8034776306152344, \"sum\": 0.8034776306152344, \"min\": 0.8034776306152344}}, \"EndTime\": 1596636220.470884, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.470791}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7354215240478515, \"sum\": 0.7354215240478515, \"min\": 0.7354215240478515}}, \"EndTime\": 1596636220.470986, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.470965}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.139169692993164, \"sum\": 1.139169692993164, \"min\": 1.139169692993164}}, \"EndTime\": 1596636220.471066, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471046}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7232080078125, \"sum\": 0.7232080078125, \"min\": 0.7232080078125}}, \"EndTime\": 1596636220.471134, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471116}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3858281326293945, \"sum\": 0.3858281326293945, \"min\": 0.3858281326293945}}, \"EndTime\": 1596636220.471198, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.47118}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6050418090820312, \"sum\": 0.6050418090820312, \"min\": 0.6050418090820312}}, \"EndTime\": 1596636220.471258, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", 
\"epoch\": 3}, \"StartTime\": 1596636220.471242}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6368694686889649, \"sum\": 0.6368694686889649, \"min\": 0.6368694686889649}}, \"EndTime\": 1596636220.471331, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471312}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6067627716064453, \"sum\": 0.6067627716064453, \"min\": 0.6067627716064453}}, \"EndTime\": 1596636220.471397, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471379}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.797803955078125, \"sum\": 0.797803955078125, \"min\": 0.797803955078125}}, \"EndTime\": 1596636220.471466, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471449}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2226758575439454, \"sum\": 1.2226758575439454, \"min\": 1.2226758575439454}}, \"EndTime\": 1596636220.471531, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471514}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7378363037109374, \"sum\": 0.7378363037109374, \"min\": 0.7378363037109374}}, \"EndTime\": 1596636220.471592, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471576}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9612397003173828, \"sum\": 0.9612397003173828, \"min\": 0.9612397003173828}}, \"EndTime\": 1596636220.471651, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471635}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5224584579467774, \"sum\": 0.5224584579467774, \"min\": 0.5224584579467774}}, \"EndTime\": 1596636220.47171, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471695}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5152346420288086, \"sum\": 0.5152346420288086, \"min\": 0.5152346420288086}}, \"EndTime\": 1596636220.471768, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471752}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.38478580474853513, \"sum\": 0.38478580474853513, \"min\": 0.38478580474853513}}, \"EndTime\": 1596636220.471832, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471814}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5804007720947265, \"sum\": 0.5804007720947265, 
\"min\": 0.5804007720947265}}, \"EndTime\": 1596636220.471907, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471888}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8817536926269531, \"sum\": 0.8817536926269531, \"min\": 0.8817536926269531}}, \"EndTime\": 1596636220.471968, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.471952}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7366322326660156, \"sum\": 0.7366322326660156, \"min\": 0.7366322326660156}}, \"EndTime\": 1596636220.47203, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472013}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0460600280761718, \"sum\": 1.0460600280761718, \"min\": 1.0460600280761718}}, \"EndTime\": 1596636220.472087, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472072}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.879061050415039, \"sum\": 0.879061050415039, \"min\": 0.879061050415039}}, \"EndTime\": 1596636220.472145, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.47213}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.35389225006103514, \"sum\": 0.35389225006103514, \"min\": 0.35389225006103514}}, \"EndTime\": 1596636220.472217, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472198}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5073154830932617, \"sum\": 0.5073154830932617, \"min\": 0.5073154830932617}}, \"EndTime\": 1596636220.472285, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472269}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.32550491333007814, \"sum\": 0.32550491333007814, \"min\": 0.32550491333007814}}, \"EndTime\": 1596636220.472348, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472332}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.34145053863525393, \"sum\": 0.34145053863525393, \"min\": 0.34145053863525393}}, \"EndTime\": 1596636220.472406, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472391}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9485013580322266, \"sum\": 0.9485013580322266, \"min\": 0.9485013580322266}}, \"EndTime\": 1596636220.472463, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", 
\"epoch\": 3}, \"StartTime\": 1596636220.472449}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9652549743652343, \"sum\": 0.9652549743652343, \"min\": 0.9652549743652343}}, \"EndTime\": 1596636220.472516, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472502}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.102833480834961, \"sum\": 1.102833480834961, \"min\": 1.102833480834961}}, \"EndTime\": 1596636220.472569, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472555}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0961109924316406, \"sum\": 1.0961109924316406, \"min\": 1.0961109924316406}}, \"EndTime\": 1596636220.472623, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472609}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9181232452392578, \"sum\": 0.9181232452392578, \"min\": 0.9181232452392578}}, \"EndTime\": 1596636220.472678, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472664}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8693671417236328, \"sum\": 0.8693671417236328, \"min\": 0.8693671417236328}}, \"EndTime\": 1596636220.472731, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472718}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9085729217529297, \"sum\": 0.9085729217529297, \"min\": 0.9085729217529297}}, \"EndTime\": 1596636220.472787, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472773}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9340888214111328, \"sum\": 0.9340888214111328, \"min\": 0.9340888214111328}}, \"EndTime\": 1596636220.472852, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.472836}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=3, train mse_objective <loss>=0.803477630615\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 67.32026890345982, \"sum\": 67.32026890345982, \"min\": 67.32026890345982}}, \"EndTime\": 1596636220.581388, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581324}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.092110770089285, \"sum\": 60.092110770089285, \"min\": 60.092110770089285}}, \"EndTime\": 1596636220.581473, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 
1596636220.581459}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 87.23725237165179, \"sum\": 87.23725237165179, \"min\": 87.23725237165179}}, \"EndTime\": 1596636220.581532, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581518}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 61.99536568777902, \"sum\": 61.99536568777902, \"min\": 61.99536568777902}}, \"EndTime\": 1596636220.581597, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.58158}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.610927036830354, \"sum\": 41.610927036830354, \"min\": 41.610927036830354}}, \"EndTime\": 1596636220.581682, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581643}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.3597412109375, \"sum\": 47.3597412109375, \"min\": 47.3597412109375}}, \"EndTime\": 1596636220.581746, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.58173}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.461212158203125, \"sum\": 48.461212158203125, \"min\": 48.461212158203125}}, \"EndTime\": 1596636220.581801, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581786}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.54587663922991, \"sum\": 47.54587663922991, \"min\": 47.54587663922991}}, \"EndTime\": 1596636220.581858, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581841}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 61.741280691964285, \"sum\": 61.741280691964285, \"min\": 61.741280691964285}}, \"EndTime\": 1596636220.581915, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581899}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 88.99681745256696, \"sum\": 88.99681745256696, \"min\": 88.99681745256696}}, \"EndTime\": 1596636220.581976, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.581959}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 61.050162179129465, \"sum\": 61.050162179129465, \"min\": 61.050162179129465}}, \"EndTime\": 1596636220.582043, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582026}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 72.47118268694196, \"sum\": 72.47118268694196, 
\"min\": 72.47118268694196}}, \"EndTime\": 1596636220.582104, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582089}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.74304635184152, \"sum\": 43.74304635184152, \"min\": 43.74304635184152}}, \"EndTime\": 1596636220.582168, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582152}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.62904575892857, \"sum\": 41.62904575892857, \"min\": 41.62904575892857}}, \"EndTime\": 1596636220.582221, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582205}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.507293701171875, \"sum\": 41.507293701171875, \"min\": 41.507293701171875}}, \"EndTime\": 1596636220.582284, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582268}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.89955793108259, \"sum\": 48.89955793108259, \"min\": 48.89955793108259}}, \"EndTime\": 1596636220.582343, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582327}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.75144304547992, \"sum\": 68.75144304547992, \"min\": 68.75144304547992}}, \"EndTime\": 1596636220.582395, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582379}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.35467093331473, \"sum\": 58.35467093331473, \"min\": 58.35467093331473}}, \"EndTime\": 1596636220.582449, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582434}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.47289167131696, \"sum\": 84.47289167131696, \"min\": 84.47289167131696}}, \"EndTime\": 1596636220.582505, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582489}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 72.60818045479911, \"sum\": 72.60818045479911, \"min\": 72.60818045479911}}, \"EndTime\": 1596636220.58256, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582546}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.16917201450893, \"sum\": 53.16917201450893, \"min\": 53.16917201450893}}, \"EndTime\": 1596636220.582612, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear 
Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582598}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.842821393694194, \"sum\": 42.842821393694194, \"min\": 42.842821393694194}}, \"EndTime\": 1596636220.582668, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582652}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.47522408621652, \"sum\": 50.47522408621652, \"min\": 50.47522408621652}}, \"EndTime\": 1596636220.582727, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582711}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.741612025669646, \"sum\": 43.741612025669646, \"min\": 43.741612025669646}}, \"EndTime\": 1596636220.582787, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582769}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.22943115234375, \"sum\": 80.22943115234375, \"min\": 80.22943115234375}}, \"EndTime\": 1596636220.582849, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582832}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.19029017857143, \"sum\": 80.19029017857143, \"min\": 80.19029017857143}}, \"EndTime\": 1596636220.58291, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582893}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 91.39975411551339, \"sum\": 91.39975411551339, \"min\": 91.39975411551339}}, \"EndTime\": 1596636220.582969, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.582953}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 86.2806396484375, \"sum\": 86.2806396484375, \"min\": 86.2806396484375}}, \"EndTime\": 1596636220.583035, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.583018}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.17832728794643, \"sum\": 81.17832728794643, \"min\": 81.17832728794643}}, \"EndTime\": 1596636220.583087, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.583072}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 76.33183070591518, \"sum\": 76.33183070591518, \"min\": 76.33183070591518}}, \"EndTime\": 1596636220.583146, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.583129}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 
92.20086669921875, \"sum\": 92.20086669921875, \"min\": 92.20086669921875}}, \"EndTime\": 1596636220.583208, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.583191}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 87.79660470145089, \"sum\": 87.79660470145089, \"min\": 87.79660470145089}}, \"EndTime\": 1596636220.583271, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.583254}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=3, validation mse_objective <loss>=67.3202689035\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=3, criteria=mse_objective, value=41.5072937012\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 3\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpYxuSUj/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 26 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 11, \"sum\": 11.0, \"min\": 11}, \"Total Records Seen\": {\"count\": 1, \"max\": 1335, \"sum\": 1335.0, \"min\": 1335}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 6, \"sum\": 6.0, \"min\": 6}}, \"EndTime\": 1596636220.588971, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 3}, \"StartTime\": 1596636220.435871}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=1481.31836208 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7589236450195312, \"sum\": 0.7589236450195312, \"min\": 0.7589236450195312}}, \"EndTime\": 1596636220.620353, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620257}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6978939819335938, \"sum\": 0.6978939819335938, \"min\": 0.6978939819335938}}, \"EndTime\": 1596636220.620453, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620434}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0740135192871094, \"sum\": 1.0740135192871094, \"min\": 1.0740135192871094}}, \"EndTime\": 1596636220.620529, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620511}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6858953857421874, 
\"sum\": 0.6858953857421874, \"min\": 0.6858953857421874}}, \"EndTime\": 1596636220.620605, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620585}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.42433383941650393, \"sum\": 0.42433383941650393, \"min\": 0.42433383941650393}}, \"EndTime\": 1596636220.620671, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620653}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4752368927001953, \"sum\": 0.4752368927001953, \"min\": 0.4752368927001953}}, \"EndTime\": 1596636220.62073, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620715}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.49608402252197265, \"sum\": 0.49608402252197265, \"min\": 0.49608402252197265}}, \"EndTime\": 1596636220.620791, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620775}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.48118244171142577, \"sum\": 0.48118244171142577, \"min\": 0.48118244171142577}}, \"EndTime\": 1596636220.620849, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620833}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7603107452392578, \"sum\": 0.7603107452392578, \"min\": 0.7603107452392578}}, \"EndTime\": 1596636220.620906, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620891}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1537002563476562, \"sum\": 1.1537002563476562, \"min\": 1.1537002563476562}}, \"EndTime\": 1596636220.620963, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.620947}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.704283218383789, \"sum\": 0.704283218383789, \"min\": 0.704283218383789}}, \"EndTime\": 1596636220.621021, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621005}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9104476165771485, \"sum\": 0.9104476165771485, \"min\": 0.9104476165771485}}, \"EndTime\": 1596636220.621087, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621071}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4482228088378906, \"sum\": 0.4482228088378906, \"min\": 0.4482228088378906}}, \"EndTime\": 1596636220.621154, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": 
\"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621137}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.42640209197998047, \"sum\": 0.42640209197998047, \"min\": 0.42640209197998047}}, \"EndTime\": 1596636220.621211, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621196}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.423814697265625, \"sum\": 0.423814697265625, \"min\": 0.423814697265625}}, \"EndTime\": 1596636220.621275, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.62126}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.48311840057373046, \"sum\": 0.48311840057373046, \"min\": 0.48311840057373046}}, \"EndTime\": 1596636220.621337, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621322}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8338124084472657, \"sum\": 0.8338124084472657, \"min\": 0.8338124084472657}}, \"EndTime\": 1596636220.621395, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.62138}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7035746002197265, \"sum\": 0.7035746002197265, \"min\": 0.7035746002197265}}, \"EndTime\": 1596636220.621451, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621436}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9930442810058594, \"sum\": 0.9930442810058594, \"min\": 0.9930442810058594}}, \"EndTime\": 1596636220.621505, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621491}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8335458374023438, \"sum\": 0.8335458374023438, \"min\": 0.8335458374023438}}, \"EndTime\": 1596636220.621559, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621545}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5837223815917969, \"sum\": 0.5837223815917969, \"min\": 0.5837223815917969}}, \"EndTime\": 1596636220.621613, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621599}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.44094142913818357, \"sum\": 0.44094142913818357, \"min\": 0.44094142913818357}}, \"EndTime\": 1596636220.621695, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621677}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5463008117675782, 
\"sum\": 0.5463008117675782, \"min\": 0.5463008117675782}}, \"EndTime\": 1596636220.621752, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621738}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.45469600677490235, \"sum\": 0.45469600677490235, \"min\": 0.45469600677490235}}, \"EndTime\": 1596636220.621804, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.62179}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9486573791503906, \"sum\": 0.9486573791503906, \"min\": 0.9486573791503906}}, \"EndTime\": 1596636220.62186, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621845}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9669514465332031, \"sum\": 0.9669514465332031, \"min\": 0.9669514465332031}}, \"EndTime\": 1596636220.621917, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621902}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0641648864746094, \"sum\": 1.0641648864746094, \"min\": 1.0641648864746094}}, \"EndTime\": 1596636220.62197, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.621956}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0766274261474609, \"sum\": 1.0766274261474609, \"min\": 1.0766274261474609}}, \"EndTime\": 1596636220.622023, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.622009}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0427117156982422, \"sum\": 1.0427117156982422, \"min\": 1.0427117156982422}}, \"EndTime\": 1596636220.622075, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.622061}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9319218444824219, \"sum\": 0.9319218444824219, \"min\": 0.9319218444824219}}, \"EndTime\": 1596636220.622128, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.622114}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1005396270751953, \"sum\": 1.1005396270751953, \"min\": 1.1005396270751953}}, \"EndTime\": 1596636220.622191, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.622176}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1103705596923827, \"sum\": 1.1103705596923827, \"min\": 1.1103705596923827}}, \"EndTime\": 1596636220.622254, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.622239}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=4, train mse_objective <loss>=0.75892364502\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 64.33911568777901, \"sum\": 64.33911568777901, \"min\": 64.33911568777901}}, \"EndTime\": 1596636220.671795, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.671741}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.951163155691965, \"sum\": 57.951163155691965, \"min\": 57.951163155691965}}, \"EndTime\": 1596636220.671861, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.671848}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.70381382533482, \"sum\": 82.70381382533482, \"min\": 82.70381382533482}}, \"EndTime\": 1596636220.671922, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.671907}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.646126883370535, \"sum\": 59.646126883370535, \"min\": 59.646126883370535}}, \"EndTime\": 1596636220.671983, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.671971}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.88358851841518, \"sum\": 51.88358851841518, \"min\": 51.88358851841518}}, \"EndTime\": 1596636220.672037, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672021}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.183035714285715, \"sum\": 42.183035714285715, \"min\": 42.183035714285715}}, \"EndTime\": 1596636220.6721, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672082}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.933436802455354, \"sum\": 43.933436802455354, \"min\": 43.933436802455354}}, \"EndTime\": 1596636220.672153, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672142}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 44.10770525251116, \"sum\": 44.10770525251116, \"min\": 44.10770525251116}}, \"EndTime\": 1596636220.672186, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672178}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.753858293805806, \"sum\": 59.753858293805806, \"min\": 59.753858293805806}}, \"EndTime\": 1596636220.672224, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear 
Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.67221}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.23070417131696, \"sum\": 84.23070417131696, \"min\": 84.23070417131696}}, \"EndTime\": 1596636220.672282, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672266}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.12410627092634, \"sum\": 59.12410627092634, \"min\": 59.12410627092634}}, \"EndTime\": 1596636220.672335, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.67232}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 69.42526245117188, \"sum\": 69.42526245117188, \"min\": 69.42526245117188}}, \"EndTime\": 1596636220.672393, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672377}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 46.792393275669646, \"sum\": 46.792393275669646, \"min\": 46.792393275669646}}, \"EndTime\": 1596636220.672455, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672439}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.532784598214285, \"sum\": 45.532784598214285, \"min\": 45.532784598214285}}, \"EndTime\": 1596636220.672514, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672497}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.29984828404018, \"sum\": 53.29984828404018, \"min\": 53.29984828404018}}, \"EndTime\": 1596636220.672572, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672557}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 46.635572160993306, \"sum\": 46.635572160993306, \"min\": 46.635572160993306}}, \"EndTime\": 1596636220.672619, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672609}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 65.57249232700893, \"sum\": 65.57249232700893, \"min\": 65.57249232700893}}, \"EndTime\": 1596636220.672658, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672644}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.52056448800223, \"sum\": 56.52056448800223, \"min\": 56.52056448800223}}, \"EndTime\": 1596636220.672719, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672702}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 
80.95668247767857, \"sum\": 80.95668247767857, \"min\": 80.95668247767857}}, \"EndTime\": 1596636220.672783, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672766}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 69.63454328264508, \"sum\": 69.63454328264508, \"min\": 69.63454328264508}}, \"EndTime\": 1596636220.672847, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672831}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.007520403180806, \"sum\": 60.007520403180806, \"min\": 60.007520403180806}}, \"EndTime\": 1596636220.672911, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672894}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.2242431640625, \"sum\": 48.2242431640625, \"min\": 48.2242431640625}}, \"EndTime\": 1596636220.672974, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.672957}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.331878662109375, \"sum\": 62.331878662109375, \"min\": 62.331878662109375}}, \"EndTime\": 1596636220.673036, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.67302}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.09521920340402, \"sum\": 57.09521920340402, \"min\": 57.09521920340402}}, \"EndTime\": 1596636220.673093, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673078}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.32817731584821, \"sum\": 80.32817731584821, \"min\": 80.32817731584821}}, \"EndTime\": 1596636220.673149, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673135}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.90280587332589, \"sum\": 79.90280587332589, \"min\": 79.90280587332589}}, \"EndTime\": 1596636220.673201, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673187}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 88.21437290736607, \"sum\": 88.21437290736607, \"min\": 88.21437290736607}}, \"EndTime\": 1596636220.673257, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673241}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.51352364676339, \"sum\": 84.51352364676339, \"min\": 84.51352364676339}}, \"EndTime\": 1596636220.673298, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673289}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 88.77975899832589, \"sum\": 88.77975899832589, \"min\": 88.77975899832589}}, \"EndTime\": 1596636220.673335, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673321}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 85.19436209542411, \"sum\": 85.19436209542411, \"min\": 85.19436209542411}}, \"EndTime\": 1596636220.673393, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673376}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 102.89630998883929, \"sum\": 102.89630998883929, \"min\": 102.89630998883929}}, \"EndTime\": 1596636220.673456, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673439}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 93.14415631975446, \"sum\": 93.14415631975446, \"min\": 93.14415631975446}}, \"EndTime\": 1596636220.673519, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.673503}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=4, validation mse_objective <loss>=64.3391156878\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=4, criteria=mse_objective, value=42.1830357143\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 4\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpEdaEOH/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 33 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 13, \"sum\": 13.0, \"min\": 13}, \"Total Records Seen\": {\"count\": 1, \"max\": 1562, \"sum\": 1562.0, \"min\": 1562}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 7, \"sum\": 7.0, \"min\": 7}}, \"EndTime\": 1596636220.679, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 4}, \"StartTime\": 1596636220.58925}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2525.25503801 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.718111572265625, \"sum\": 0.718111572265625, \"min\": 0.718111572265625}}, \"EndTime\": 1596636220.699734, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.69968}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6639950561523438, \"sum\": 0.6639950561523438, \"min\": 0.6639950561523438}}, \"EndTime\": 1596636220.699797, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.699785}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0127093505859375, \"sum\": 1.0127093505859375, \"min\": 1.0127093505859375}}, \"EndTime\": 1596636220.699859, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.699842}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6523167419433594, \"sum\": 0.6523167419433594, \"min\": 0.6523167419433594}}, \"EndTime\": 1596636220.699935, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.699915}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5346123886108398, \"sum\": 0.5346123886108398, \"min\": 0.5346123886108398}}, \"EndTime\": 1596636220.700003, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.699986}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4343001937866211, \"sum\": 0.4343001937866211, \"min\": 0.4343001937866211}}, \"EndTime\": 1596636220.700063, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700047}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.45478096008300783, \"sum\": 0.45478096008300783, \"min\": 0.45478096008300783}}, \"EndTime\": 1596636220.700116, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700102}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.455242919921875, \"sum\": 0.455242919921875, \"min\": 0.455242919921875}}, \"EndTime\": 1596636220.700174, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.70016}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.726645736694336, \"sum\": 0.726645736694336, \"min\": 0.726645736694336}}, \"EndTime\": 1596636220.700234, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.70022}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0886349487304687, \"sum\": 1.0886349487304687, \"min\": 1.0886349487304687}}, \"EndTime\": 1596636220.700289, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700273}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 
0.6743929290771484, \"sum\": 0.6743929290771484, \"min\": 0.6743929290771484}}, \"EndTime\": 1596636220.700327, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700315}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8633979797363281, \"sum\": 0.8633979797363281, \"min\": 0.8633979797363281}}, \"EndTime\": 1596636220.700384, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700367}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4848853302001953, \"sum\": 0.4848853302001953, \"min\": 0.4848853302001953}}, \"EndTime\": 1596636220.700439, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700428}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4726906967163086, \"sum\": 0.4726906967163086, \"min\": 0.4726906967163086}}, \"EndTime\": 1596636220.700494, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700478}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.541185302734375, \"sum\": 0.541185302734375, \"min\": 0.541185302734375}}, \"EndTime\": 1596636220.700553, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700542}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.46762054443359374, \"sum\": 0.46762054443359374, \"min\": 0.46762054443359374}}, \"EndTime\": 1596636220.700611, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700595}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7897593688964843, \"sum\": 0.7897593688964843, \"min\": 0.7897593688964843}}, \"EndTime\": 1596636220.700665, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700655}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6733760833740234, \"sum\": 0.6733760833740234, \"min\": 0.6733760833740234}}, \"EndTime\": 1596636220.700732, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700715}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9439054870605469, \"sum\": 0.9439054870605469, \"min\": 0.9439054870605469}}, \"EndTime\": 1596636220.700796, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700779}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7918466186523437, \"sum\": 0.7918466186523437, \"min\": 0.7918466186523437}}, \"EndTime\": 1596636220.700847, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700837}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6596467590332031, \"sum\": 0.6596467590332031, \"min\": 0.6596467590332031}}, \"EndTime\": 1596636220.700879, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700871}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5176898956298828, \"sum\": 0.5176898956298828, \"min\": 0.5176898956298828}}, \"EndTime\": 1596636220.700909, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700901}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6767399597167969, \"sum\": 0.6767399597167969, \"min\": 0.6767399597167969}}, \"EndTime\": 1596636220.700939, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700931}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6014011383056641, \"sum\": 0.6014011383056641, \"min\": 0.6014011383056641}}, \"EndTime\": 1596636220.700969, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.700961}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9531016540527344, \"sum\": 0.9531016540527344, \"min\": 0.9531016540527344}}, \"EndTime\": 1596636220.701029, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701013}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9659934997558594, \"sum\": 0.9659934997558594, \"min\": 0.9659934997558594}}, \"EndTime\": 1596636220.701086, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701069}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0305809783935547, \"sum\": 1.0305809783935547, \"min\": 1.0305809783935547}}, \"EndTime\": 1596636220.70114, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701123}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0521208953857422, \"sum\": 1.0521208953857422, \"min\": 1.0521208953857422}}, \"EndTime\": 1596636220.7012, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701183}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1385425567626952, \"sum\": 1.1385425567626952, \"min\": 1.1385425567626952}}, \"EndTime\": 1596636220.701258, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701242}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, 
\"max\": 1.0385865783691406, \"sum\": 1.0385865783691406, \"min\": 1.0385865783691406}}, \"EndTime\": 1596636220.701317, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.7013}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.2386492156982423, \"sum\": 1.2386492156982423, \"min\": 1.2386492156982423}}, \"EndTime\": 1596636220.701373, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701358}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.198831558227539, \"sum\": 1.198831558227539, \"min\": 1.198831558227539}}, \"EndTime\": 1596636220.701434, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.701417}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=5, train mse_objective <loss>=0.718111572266\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 61.785958426339285, \"sum\": 61.785958426339285, \"min\": 61.785958426339285}}, \"EndTime\": 1596636220.737316, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737263}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.187935965401785, \"sum\": 56.187935965401785, \"min\": 56.187935965401785}}, \"EndTime\": 1596636220.737391, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737371}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.64973667689732, \"sum\": 78.64973667689732, \"min\": 78.64973667689732}}, \"EndTime\": 1596636220.73746, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737443}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.69669886997768, \"sum\": 57.69669886997768, \"min\": 57.69669886997768}}, \"EndTime\": 1596636220.737535, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737516}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.405177525111604, \"sum\": 39.405177525111604, \"min\": 39.405177525111604}}, \"EndTime\": 1596636220.737597, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737583}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 35.53327505929129, \"sum\": 35.53327505929129, \"min\": 35.53327505929129}}, \"EndTime\": 1596636220.73767, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737643}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.407091413225444, 
\"sum\": 37.407091413225444, \"min\": 37.407091413225444}}, \"EndTime\": 1596636220.737727, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.73771}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.644077845982146, \"sum\": 38.644077845982146, \"min\": 38.644077845982146}}, \"EndTime\": 1596636220.73779, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737772}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.19270542689732, \"sum\": 58.19270542689732, \"min\": 58.19270542689732}}, \"EndTime\": 1596636220.737861, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737842}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.90786307198661, \"sum\": 79.90786307198661, \"min\": 79.90786307198661}}, \"EndTime\": 1596636220.737929, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737911}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.58476039341518, \"sum\": 57.58476039341518, \"min\": 57.58476039341518}}, \"EndTime\": 1596636220.73799, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.737973}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 66.83094569614956, \"sum\": 66.83094569614956, \"min\": 66.83094569614956}}, \"EndTime\": 1596636220.738046, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738033}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.77494158063616, \"sum\": 40.77494158063616, \"min\": 40.77494158063616}}, \"EndTime\": 1596636220.738079, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738071}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.942269461495535, \"sum\": 38.942269461495535, \"min\": 38.942269461495535}}, \"EndTime\": 1596636220.738142, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738126}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.46636962890625, \"sum\": 42.46636962890625, \"min\": 42.46636962890625}}, \"EndTime\": 1596636220.738209, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738193}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.04136439732143, \"sum\": 40.04136439732143, \"min\": 40.04136439732143}}, \"EndTime\": 1596636220.738272, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738256}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.815359933035715, \"sum\": 62.815359933035715, \"min\": 62.815359933035715}}, \"EndTime\": 1596636220.738331, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738315}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 55.05852835518973, \"sum\": 55.05852835518973, \"min\": 55.05852835518973}}, \"EndTime\": 1596636220.738394, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738377}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.90774100167411, \"sum\": 77.90774100167411, \"min\": 77.90774100167411}}, \"EndTime\": 1596636220.738456, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738439}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 67.05066353934151, \"sum\": 67.05066353934151, \"min\": 67.05066353934151}}, \"EndTime\": 1596636220.738515, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738498}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.267181396484375, \"sum\": 37.267181396484375, \"min\": 37.267181396484375}}, \"EndTime\": 1596636220.738577, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.73856}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 46.195726667131694, \"sum\": 46.195726667131694, \"min\": 46.195726667131694}}, \"EndTime\": 1596636220.738637, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738621}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.84916469029018, \"sum\": 41.84916469029018, \"min\": 41.84916469029018}}, \"EndTime\": 1596636220.738689, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738674}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.86379132952009, \"sum\": 42.86379132952009, \"min\": 42.86379132952009}}, \"EndTime\": 1596636220.738749, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738732}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.74598911830357, \"sum\": 80.74598911830357, \"min\": 80.74598911830357}}, \"EndTime\": 1596636220.738808, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738792}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"validation_mse_objective\": {\"count\": 1, \"max\": 79.64444405691964, \"sum\": 79.64444405691964, \"min\": 79.64444405691964}}, \"EndTime\": 1596636220.738858, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738843}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 85.55594308035714, \"sum\": 85.55594308035714, \"min\": 85.55594308035714}}, \"EndTime\": 1596636220.738917, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.7389}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.85203334263393, \"sum\": 82.85203334263393, \"min\": 82.85203334263393}}, \"EndTime\": 1596636220.738968, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.738951}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.79261125837054, \"sum\": 81.79261125837054, \"min\": 81.79261125837054}}, \"EndTime\": 1596636220.739018, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.739003}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.02801513671875, \"sum\": 82.02801513671875, \"min\": 82.02801513671875}}, \"EndTime\": 1596636220.739081, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.739064}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 91.51193673270089, \"sum\": 91.51193673270089, \"min\": 91.51193673270089}}, \"EndTime\": 1596636220.739139, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.739123}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.22247314453125, \"sum\": 82.22247314453125, \"min\": 82.22247314453125}}, \"EndTime\": 1596636220.739196, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.739179}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=5, validation mse_objective <loss>=61.7859584263\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=5, criteria=mse_objective, value=35.5332750593\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 5\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpUzKsmm/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 40 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, 
\"Total Batches Seen\": {\"count\": 1, \"max\": 15, \"sum\": 15.0, \"min\": 15}, \"Total Records Seen\": {\"count\": 1, \"max\": 1789, \"sum\": 1789.0, \"min\": 1789}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 8, \"sum\": 8.0, \"min\": 8}}, \"EndTime\": 1596636220.744493, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 5}, \"StartTime\": 1596636220.679285}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=3474.83041303 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6806546783447266, \"sum\": 0.6806546783447266, \"min\": 0.6806546783447266}}, \"EndTime\": 1596636220.765289, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765238}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6332957458496093, \"sum\": 0.6332957458496093, \"min\": 0.6332957458496093}}, \"EndTime\": 1596636220.765352, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.76534}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9548780059814453, \"sum\": 0.9548780059814453, \"min\": 0.9548780059814453}}, \"EndTime\": 1596636220.765414, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765397}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6220399856567382, \"sum\": 0.6220399856567382, \"min\": 0.6220399856567382}}, \"EndTime\": 1596636220.765489, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.76547}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3992182922363281, \"sum\": 0.3992182922363281, \"min\": 0.3992182922363281}}, \"EndTime\": 1596636220.765558, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.76554}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.37072269439697264, \"sum\": 0.37072269439697264, \"min\": 0.37072269439697264}}, \"EndTime\": 1596636220.765626, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765609}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3816422653198242, \"sum\": 0.3816422653198242, \"min\": 0.3816422653198242}}, \"EndTime\": 1596636220.765719, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765701}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3866064834594727, \"sum\": 0.3866064834594727, \"min\": 0.3866064834594727}}, \"EndTime\": 1596636220.765779, \"Dimensions\": {\"model\": 7, 
\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765763}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6962950897216796, \"sum\": 0.6962950897216796, \"min\": 0.6962950897216796}}, \"EndTime\": 1596636220.76584, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765823}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.027069320678711, \"sum\": 1.027069320678711, \"min\": 1.027069320678711}}, \"EndTime\": 1596636220.765909, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765892}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6476103973388672, \"sum\": 0.6476103973388672, \"min\": 0.6476103973388672}}, \"EndTime\": 1596636220.765975, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.765957}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.81970947265625, \"sum\": 0.81970947265625, \"min\": 0.81970947265625}}, \"EndTime\": 1596636220.766036, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766019}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.41191226959228516, \"sum\": 0.41191226959228516, \"min\": 0.41191226959228516}}, \"EndTime\": 1596636220.766094, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766077}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40338619232177736, \"sum\": 0.40338619232177736, \"min\": 0.40338619232177736}}, \"EndTime\": 1596636220.766156, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.76614}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4088611221313477, \"sum\": 0.4088611221313477, \"min\": 0.4088611221313477}}, \"EndTime\": 1596636220.76622, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766203}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3951526641845703, \"sum\": 0.3951526641845703, \"min\": 0.3951526641845703}}, \"EndTime\": 1596636220.766282, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766264}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7491928863525391, \"sum\": 0.7491928863525391, \"min\": 0.7491928863525391}}, \"EndTime\": 1596636220.766341, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766326}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"train_mse_objective\": {\"count\": 1, \"max\": 0.6454884338378907, \"sum\": 0.6454884338378907, \"min\": 0.6454884338378907}}, \"EndTime\": 1596636220.766403, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766386}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.898250961303711, \"sum\": 0.898250961303711, \"min\": 0.898250961303711}}, \"EndTime\": 1596636220.766461, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766444}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7535623168945312, \"sum\": 0.7535623168945312, \"min\": 0.7535623168945312}}, \"EndTime\": 1596636220.76652, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766502}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40097389221191404, \"sum\": 0.40097389221191404, \"min\": 0.40097389221191404}}, \"EndTime\": 1596636220.766578, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766562}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5031251525878906, \"sum\": 0.5031251525878906, \"min\": 0.5031251525878906}}, \"EndTime\": 1596636220.766644, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766627}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.44235347747802733, \"sum\": 0.44235347747802733, \"min\": 0.44235347747802733}}, \"EndTime\": 1596636220.766704, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766687}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4400681686401367, \"sum\": 0.4400681686401367, \"min\": 0.4400681686401367}}, \"EndTime\": 1596636220.766767, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766749}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9602164459228516, \"sum\": 0.9602164459228516, \"min\": 0.9602164459228516}}, \"EndTime\": 1596636220.766826, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.76681}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.963577880859375, \"sum\": 0.963577880859375, \"min\": 0.963577880859375}}, \"EndTime\": 1596636220.766885, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766868}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0021926879882812, \"sum\": 1.0021926879882812, \"min\": 1.0021926879882812}}, \"EndTime\": 1596636220.766945, \"Dimensions\": {\"model\": 
26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766928}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0254791259765625, \"sum\": 1.0254791259765625, \"min\": 1.0254791259765625}}, \"EndTime\": 1596636220.767002, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.766986}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.026239776611328, \"sum\": 1.026239776611328, \"min\": 1.026239776611328}}, \"EndTime\": 1596636220.767063, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.767046}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9987367248535156, \"sum\": 0.9987367248535156, \"min\": 0.9987367248535156}}, \"EndTime\": 1596636220.767124, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.767108}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.10829833984375, \"sum\": 1.10829833984375, \"min\": 1.10829833984375}}, \"EndTime\": 1596636220.767183, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.767166}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0416168212890624, \"sum\": 1.0416168212890624, \"min\": 1.0416168212890624}}, \"EndTime\": 1596636220.767242, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.767225}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=6, train mse_objective <loss>=0.680654678345\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.629198346819194, \"sum\": 59.629198346819194, \"min\": 59.629198346819194}}, \"EndTime\": 1596636220.805544, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805486}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.786673409598215, \"sum\": 54.786673409598215, \"min\": 54.786673409598215}}, \"EndTime\": 1596636220.805617, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805604}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 75.01809256417411, \"sum\": 75.01809256417411, \"min\": 75.01809256417411}}, \"EndTime\": 1596636220.805705, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805687}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.11993408203125, \"sum\": 56.11993408203125, \"min\": 56.11993408203125}}, \"EndTime\": 1596636220.80577, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805752}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 31.247674124581472, \"sum\": 31.247674124581472, \"min\": 31.247674124581472}}, \"EndTime\": 1596636220.805832, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805816}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.964019775390625, \"sum\": 39.964019775390625, \"min\": 39.964019775390625}}, \"EndTime\": 1596636220.805893, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805876}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.87135968889509, \"sum\": 40.87135968889509, \"min\": 40.87135968889509}}, \"EndTime\": 1596636220.805954, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.805936}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.726405552455354, \"sum\": 41.726405552455354, \"min\": 41.726405552455354}}, \"EndTime\": 1596636220.806016, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.996207101004465, \"sum\": 56.996207101004465, \"min\": 56.996207101004465}}, \"EndTime\": 1596636220.806077, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806059}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 75.99486432756696, \"sum\": 75.99486432756696, \"min\": 75.99486432756696}}, \"EndTime\": 1596636220.806142, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806125}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.37113734654018, \"sum\": 56.37113734654018, \"min\": 56.37113734654018}}, \"EndTime\": 1596636220.806201, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806184}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 64.63766043526786, \"sum\": 64.63766043526786, \"min\": 64.63766043526786}}, \"EndTime\": 1596636220.806262, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806245}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.88494873046875, \"sum\": 38.88494873046875, \"min\": 38.88494873046875}}, \"EndTime\": 1596636220.806314, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806303}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"validation_mse_objective\": {\"count\": 1, \"max\": 35.69530814034598, \"sum\": 35.69530814034598, \"min\": 35.69530814034598}}, \"EndTime\": 1596636220.806346, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806338}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 35.12401689801897, \"sum\": 35.12401689801897, \"min\": 35.12401689801897}}, \"EndTime\": 1596636220.806398, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806382}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.04461669921875, \"sum\": 42.04461669921875, \"min\": 42.04461669921875}}, \"EndTime\": 1596636220.806456, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.80644}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.455269949776785, \"sum\": 60.455269949776785, \"min\": 60.455269949776785}}, \"EndTime\": 1596636220.806511, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.8065}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.881295340401785, \"sum\": 53.881295340401785, \"min\": 53.881295340401785}}, \"EndTime\": 1596636220.806553, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806539}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 75.26795305524554, \"sum\": 75.26795305524554, \"min\": 75.26795305524554}}, \"EndTime\": 1596636220.806614, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806597}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 64.84247262137276, \"sum\": 64.84247262137276, \"min\": 64.84247262137276}}, \"EndTime\": 1596636220.806675, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806658}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 27.450618198939733, \"sum\": 27.450618198939733, \"min\": 27.450618198939733}}, \"EndTime\": 1596636220.806734, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806718}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 46.432141985212056, \"sum\": 46.432141985212056, \"min\": 46.432141985212056}}, \"EndTime\": 1596636220.80679, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806775}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 30.69338117327009, \"sum\": 30.69338117327009, \"min\": 30.69338117327009}}, \"EndTime\": 
1596636220.806846, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806831}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 33.5838383265904, \"sum\": 33.5838383265904, \"min\": 33.5838383265904}}, \"EndTime\": 1596636220.806904, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806889}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.37229701450893, \"sum\": 81.37229701450893, \"min\": 81.37229701450893}}, \"EndTime\": 1596636220.806964, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.806948}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.45038713727679, \"sum\": 79.45038713727679, \"min\": 79.45038713727679}}, \"EndTime\": 1596636220.807024, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.807007}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 83.39445277622768, \"sum\": 83.39445277622768, \"min\": 83.39445277622768}}, \"EndTime\": 1596636220.80708, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.807064}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.42476981026786, \"sum\": 81.42476981026786, \"min\": 81.42476981026786}}, \"EndTime\": 1596636220.80714, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.807125}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 83.47078159877232, \"sum\": 83.47078159877232, \"min\": 83.47078159877232}}, \"EndTime\": 1596636220.807206, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.80719}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 89.97181919642857, \"sum\": 89.97181919642857, \"min\": 89.97181919642857}}, \"EndTime\": 1596636220.807264, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.807248}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.62418910435268, \"sum\": 84.62418910435268, \"min\": 84.62418910435268}}, \"EndTime\": 1596636220.807328, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.807312}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.39166259765625, \"sum\": 79.39166259765625, \"min\": 79.39166259765625}}, \"EndTime\": 1596636220.807382, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 
1596636220.807367}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=6, validation mse_objective <loss>=59.6291983468\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=6, criteria=mse_objective, value=27.4506181989\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Epoch 6: Loss improved. Updating best model\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 6\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpb8RcU5/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 46 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 17, \"sum\": 17.0, \"min\": 17}, \"Total Records Seen\": {\"count\": 1, \"max\": 2016, \"sum\": 2016.0, \"min\": 2016}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 9, \"sum\": 9.0, \"min\": 9}}, \"EndTime\": 1596636220.815143, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 6}, \"StartTime\": 1596636220.744753}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=3219.57977033 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6472696685791015, \"sum\": 0.6472696685791015, \"min\": 0.6472696685791015}}, \"EndTime\": 1596636220.836628, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.836562}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6064985656738281, \"sum\": 0.6064985656738281, \"min\": 0.6064985656738281}}, \"EndTime\": 1596636220.83671, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.836692}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9012408447265625, \"sum\": 0.9012408447265625, \"min\": 0.9012408447265625}}, \"EndTime\": 1596636220.836785, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.836766}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5957080841064453, \"sum\": 0.5957080841064453, \"min\": 0.5957080841064453}}, \"EndTime\": 1596636220.836841, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.836824}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3103369140625, \"sum\": 0.3103369140625, \"min\": 0.3103369140625}}, \"EndTime\": 1596636220.836898, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.836882}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.41696308135986326, \"sum\": 0.41696308135986326, \"min\": 0.41696308135986326}}, \"EndTime\": 1596636220.836957, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.836941}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4131605911254883, \"sum\": 0.4131605911254883, \"min\": 0.4131605911254883}}, \"EndTime\": 1596636220.837021, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837005}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4087186813354492, \"sum\": 0.4087186813354492, \"min\": 0.4087186813354492}}, \"EndTime\": 1596636220.837083, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837066}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6697970581054687, \"sum\": 0.6697970581054687, \"min\": 0.6697970581054687}}, \"EndTime\": 1596636220.837146, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837129}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9696914672851562, \"sum\": 0.9696914672851562, \"min\": 0.9696914672851562}}, \"EndTime\": 1596636220.837209, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837192}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.624402961730957, \"sum\": 0.624402961730957, \"min\": 0.624402961730957}}, \"EndTime\": 1596636220.837271, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837254}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7801077270507812, \"sum\": 0.7801077270507812, \"min\": 0.7801077270507812}}, \"EndTime\": 1596636220.837335, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837318}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.38387447357177734, \"sum\": 0.38387447357177734, \"min\": 0.38387447357177734}}, \"EndTime\": 1596636220.837392, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837375}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3686491012573242, \"sum\": 0.3686491012573242, \"min\": 0.3686491012573242}}, \"EndTime\": 1596636220.837455, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837438}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, 
\"max\": 0.31972253799438477, \"sum\": 0.31972253799438477, \"min\": 0.31972253799438477}}, \"EndTime\": 1596636220.837518, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.8375}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.41109725952148435, \"sum\": 0.41109725952148435, \"min\": 0.41109725952148435}}, \"EndTime\": 1596636220.837581, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837565}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7127925872802734, \"sum\": 0.7127925872802734, \"min\": 0.7127925872802734}}, \"EndTime\": 1596636220.837631, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837618}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6204463958740234, \"sum\": 0.6204463958740234, \"min\": 0.6204463958740234}}, \"EndTime\": 1596636220.837701, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837684}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8567723083496094, \"sum\": 0.8567723083496094, \"min\": 0.8567723083496094}}, \"EndTime\": 1596636220.837766, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837748}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7193704986572266, \"sum\": 0.7193704986572266, \"min\": 0.7193704986572266}}, \"EndTime\": 1596636220.837829, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837813}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.27159631729125977, \"sum\": 0.27159631729125977, \"min\": 0.27159631729125977}}, \"EndTime\": 1596636220.83789, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837874}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.49399131774902344, \"sum\": 0.49399131774902344, \"min\": 0.49399131774902344}}, \"EndTime\": 1596636220.837948, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837933}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.2927430725097656, \"sum\": 0.2927430725097656, \"min\": 0.2927430725097656}}, \"EndTime\": 1596636220.838015, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.837997}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3184643173217773, \"sum\": 0.3184643173217773, \"min\": 0.3184643173217773}}, \"EndTime\": 1596636220.838072, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838056}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9692801666259766, \"sum\": 0.9692801666259766, \"min\": 0.9692801666259766}}, \"EndTime\": 1596636220.838127, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838116}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9611419677734375, \"sum\": 0.9611419677734375, \"min\": 0.9611419677734375}}, \"EndTime\": 1596636220.838174, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.83816}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9795201110839844, \"sum\": 0.9795201110839844, \"min\": 0.9795201110839844}}, \"EndTime\": 1596636220.838242, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838224}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.000109634399414, \"sum\": 1.000109634399414, \"min\": 1.000109634399414}}, \"EndTime\": 1596636220.838315, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838296}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9939197540283203, \"sum\": 0.9939197540283203, \"min\": 0.9939197540283203}}, \"EndTime\": 1596636220.838386, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838369}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0689402770996095, \"sum\": 1.0689402770996095, \"min\": 1.0689402770996095}}, \"EndTime\": 1596636220.838458, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.83844}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0159805297851563, \"sum\": 1.0159805297851563, \"min\": 1.0159805297851563}}, \"EndTime\": 1596636220.838523, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838507}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9546951293945313, \"sum\": 0.9546951293945313, \"min\": 0.9546951293945313}}, \"EndTime\": 1596636220.838584, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.838566}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=7, train mse_objective <loss>=0.647269668579\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.765799386160715, \"sum\": 57.765799386160715, \"min\": 57.765799386160715}}, \"EndTime\": 1596636220.88309, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": 
\"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.882999}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.657095772879465, \"sum\": 53.657095772879465, \"min\": 53.657095772879465}}, \"EndTime\": 1596636220.883189, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883171}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 71.68527221679688, \"sum\": 71.68527221679688, \"min\": 71.68527221679688}}, \"EndTime\": 1596636220.88325, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883235}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.81501988002232, \"sum\": 54.81501988002232, \"min\": 54.81501988002232}}, \"EndTime\": 1596636220.883308, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883294}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.58768572126116, \"sum\": 38.58768572126116, \"min\": 38.58768572126116}}, \"EndTime\": 1596636220.883362, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883347}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.50529261997768, \"sum\": 47.50529261997768, \"min\": 47.50529261997768}}, \"EndTime\": 1596636220.883413, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.8834}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.638802664620535, \"sum\": 47.638802664620535, \"min\": 47.638802664620535}}, \"EndTime\": 1596636220.883461, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883448}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.296029227120535, \"sum\": 47.296029227120535, \"min\": 47.296029227120535}}, \"EndTime\": 1596636220.883512, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.8835}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.03289794921875, \"sum\": 56.03289794921875, \"min\": 56.03289794921875}}, \"EndTime\": 1596636220.883562, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883549}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 72.3926522391183, \"sum\": 72.3926522391183, \"min\": 72.3926522391183}}, \"EndTime\": 1596636220.883611, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883597}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 
55.35247366768973, \"sum\": 55.35247366768973, \"min\": 55.35247366768973}}, \"EndTime\": 1596636220.883666, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883651}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.724391392299104, \"sum\": 62.724391392299104, \"min\": 62.724391392299104}}, \"EndTime\": 1596636220.88372, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883706}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.32515607561384, \"sum\": 43.32515607561384, \"min\": 43.32515607561384}}, \"EndTime\": 1596636220.883777, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883761}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.593453543526785, \"sum\": 39.593453543526785, \"min\": 39.593453543526785}}, \"EndTime\": 1596636220.883831, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883817}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.507799421037944, \"sum\": 41.507799421037944, \"min\": 41.507799421037944}}, \"EndTime\": 1596636220.883885, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883871}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.465955461774556, \"sum\": 48.465955461774556, \"min\": 48.465955461774556}}, \"EndTime\": 1596636220.883943, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883926}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.396754673549104, \"sum\": 58.396754673549104, \"min\": 58.396754673549104}}, \"EndTime\": 1596636220.883997, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.883984}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.83513968331473, \"sum\": 52.83513968331473, \"min\": 52.83513968331473}}, \"EndTime\": 1596636220.884054, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.88404}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 72.90969848632812, \"sum\": 72.90969848632812, \"min\": 72.90969848632812}}, \"EndTime\": 1596636220.884107, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884093}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.92071533203125, \"sum\": 62.92071533203125, \"min\": 62.92071533203125}}, \"EndTime\": 1596636220.884159, \"Dimensions\": {\"model\": 19, \"Host\": 
\"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884145}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.499925885881694, \"sum\": 38.499925885881694, \"min\": 38.499925885881694}}, \"EndTime\": 1596636220.884214, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884197}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.436453683035715, \"sum\": 48.436453683035715, \"min\": 48.436453683035715}}, \"EndTime\": 1596636220.884274, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884259}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.26151384626116, \"sum\": 41.26151384626116, \"min\": 41.26151384626116}}, \"EndTime\": 1596636220.884327, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884314}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.39020211356027, \"sum\": 40.39020211356027, \"min\": 40.39020211356027}}, \"EndTime\": 1596636220.884381, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884367}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.00408063616071, \"sum\": 82.00408063616071, \"min\": 82.00408063616071}}, \"EndTime\": 1596636220.884433, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884419}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.26824951171875, \"sum\": 79.26824951171875, \"min\": 79.26824951171875}}, \"EndTime\": 1596636220.884484, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884471}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.60069056919643, \"sum\": 81.60069056919643, \"min\": 81.60069056919643}}, \"EndTime\": 1596636220.884538, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884524}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.24459402901786, \"sum\": 80.24459402901786, \"min\": 80.24459402901786}}, \"EndTime\": 1596636220.884591, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884577}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 93.00532749720982, \"sum\": 93.00532749720982, \"min\": 93.00532749720982}}, \"EndTime\": 1596636220.884644, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.88463}\n\u001b[0m\n\u001b[34m#metrics 
{\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 104.23214285714286, \"sum\": 104.23214285714286, \"min\": 104.23214285714286}}, \"EndTime\": 1596636220.884695, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884682}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 85.68537248883929, \"sum\": 85.68537248883929, \"min\": 85.68537248883929}}, \"EndTime\": 1596636220.884747, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884734}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 86.36607142857143, \"sum\": 86.36607142857143, \"min\": 86.36607142857143}}, \"EndTime\": 1596636220.884799, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.884785}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=7, validation mse_objective <loss>=57.7657993862\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=7, criteria=mse_objective, value=38.4999258859\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 7\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpH0QqMs/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 53 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 19, \"sum\": 19.0, \"min\": 19}, \"Total Records Seen\": {\"count\": 1, \"max\": 2243, \"sum\": 2243.0, \"min\": 2243}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 10, \"sum\": 10.0, \"min\": 10}}, \"EndTime\": 1596636220.891408, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 7}, \"StartTime\": 1596636220.815406}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2981.73581408 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6179273986816406, \"sum\": 0.6179273986816406, \"min\": 0.6179273986816406}}, \"EndTime\": 1596636220.918744, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.918687}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5835232925415039, \"sum\": 0.5835232925415039, \"min\": 0.5835232925415039}}, \"EndTime\": 1596636220.918819, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.9188}\n\u001b[0m\n\u001b[34m#metrics 
{\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8518276977539062, \"sum\": 0.8518276977539062, \"min\": 0.8518276977539062}}, \"EndTime\": 1596636220.918887, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.91887}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5732080078125, \"sum\": 0.5732080078125, \"min\": 0.5732080078125}}, \"EndTime\": 1596636220.91894, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.918924}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.38469085693359373, \"sum\": 0.38469085693359373, \"min\": 0.38469085693359373}}, \"EndTime\": 1596636220.919005, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.918988}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.48399173736572265, \"sum\": 0.48399173736572265, \"min\": 0.48399173736572265}}, \"EndTime\": 1596636220.919068, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919052}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4762537384033203, \"sum\": 0.4762537384033203, \"min\": 0.4762537384033203}}, \"EndTime\": 1596636220.919104, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919095}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.46173828125, \"sum\": 0.46173828125, \"min\": 0.46173828125}}, \"EndTime\": 1596636220.919162, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919145}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6469898986816406, \"sum\": 0.6469898986816406, \"min\": 0.6469898986816406}}, \"EndTime\": 1596636220.919231, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919213}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9165348052978516, \"sum\": 0.9165348052978516, \"min\": 0.9165348052978516}}, \"EndTime\": 1596636220.919301, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919284}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6044857788085938, \"sum\": 0.6044857788085938, \"min\": 0.6044857788085938}}, \"EndTime\": 1596636220.919363, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919346}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7445723724365234, \"sum\": 0.7445723724365234, \"min\": 0.7445723724365234}}, \"EndTime\": 1596636220.919435, \"Dimensions\": {\"model\": 11, 
\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919417}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4252370452880859, \"sum\": 0.4252370452880859, \"min\": 0.4252370452880859}}, \"EndTime\": 1596636220.919506, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919489}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4045654296875, \"sum\": 0.4045654296875, \"min\": 0.4045654296875}}, \"EndTime\": 1596636220.919576, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919558}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3849892425537109, \"sum\": 0.3849892425537109, \"min\": 0.3849892425537109}}, \"EndTime\": 1596636220.919648, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.91963}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4692980194091797, \"sum\": 0.4692980194091797, \"min\": 0.4692980194091797}}, \"EndTime\": 1596636220.919713, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919695}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6805221557617187, \"sum\": 0.6805221557617187, \"min\": 0.6805221557617187}}, \"EndTime\": 1596636220.919757, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919748}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5980792617797852, \"sum\": 0.5980792617797852, \"min\": 0.5980792617797852}}, \"EndTime\": 1596636220.919811, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919799}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.819438705444336, \"sum\": 0.819438705444336, \"min\": 0.819438705444336}}, \"EndTime\": 1596636220.919869, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919852}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6891783905029297, \"sum\": 0.6891783905029297, \"min\": 0.6891783905029297}}, \"EndTime\": 1596636220.919934, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919917}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.36319908142089846, \"sum\": 0.36319908142089846, \"min\": 0.36319908142089846}}, \"EndTime\": 1596636220.919992, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.919975}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"train_mse_objective\": {\"count\": 1, \"max\": 0.48809139251708983, \"sum\": 0.48809139251708983, \"min\": 0.48809139251708983}}, \"EndTime\": 1596636220.920052, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920036}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3715544891357422, \"sum\": 0.3715544891357422, \"min\": 0.3715544891357422}}, \"EndTime\": 1596636220.920113, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920097}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3640948486328125, \"sum\": 0.3640948486328125, \"min\": 0.3640948486328125}}, \"EndTime\": 1596636220.920172, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920156}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9786405181884765, \"sum\": 0.9786405181884765, \"min\": 0.9786405181884765}}, \"EndTime\": 1596636220.920226, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920215}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9589925384521485, \"sum\": 0.9589925384521485, \"min\": 0.9589925384521485}}, \"EndTime\": 1596636220.92028, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920265}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9617025756835937, \"sum\": 0.9617025756835937, \"min\": 0.9617025756835937}}, \"EndTime\": 1596636220.920347, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920329}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.978107681274414, \"sum\": 0.978107681274414, \"min\": 0.978107681274414}}, \"EndTime\": 1596636220.920407, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.92039}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0625218200683593, \"sum\": 1.0625218200683593, \"min\": 1.0625218200683593}}, \"EndTime\": 1596636220.920459, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920446}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.211951370239258, \"sum\": 1.211951370239258, \"min\": 1.211951370239258}}, \"EndTime\": 1596636220.920517, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920501}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0189936065673828, \"sum\": 1.0189936065673828, \"min\": 1.0189936065673828}}, \"EndTime\": 1596636220.920573, \"Dimensions\": {\"model\": 30, 
\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920563}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9970122528076172, \"sum\": 0.9970122528076172, \"min\": 0.9970122528076172}}, \"EndTime\": 1596636220.920624, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.920609}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=8, train mse_objective <loss>=0.617927398682\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.065992082868306, \"sum\": 56.065992082868306, \"min\": 56.065992082868306}}, \"EndTime\": 1596636220.962324, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962228}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.659537179129465, \"sum\": 52.659537179129465, \"min\": 52.659537179129465}}, \"EndTime\": 1596636220.96244, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962417}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.52394321986607, \"sum\": 68.52394321986607, \"min\": 68.52394321986607}}, \"EndTime\": 1596636220.962512, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962492}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.652234758649556, \"sum\": 53.652234758649556, \"min\": 53.652234758649556}}, \"EndTime\": 1596636220.96258, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.96256}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.99922398158482, \"sum\": 43.99922398158482, \"min\": 43.99922398158482}}, \"EndTime\": 1596636220.962646, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962629}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.730564662388396, \"sum\": 42.730564662388396, \"min\": 42.730564662388396}}, \"EndTime\": 1596636220.96271, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962692}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.19488525390625, \"sum\": 43.19488525390625, \"min\": 43.19488525390625}}, \"EndTime\": 1596636220.962804, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962786}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 42.13034929547991, \"sum\": 42.13034929547991, \"min\": 42.13034929547991}}, \"EndTime\": 1596636220.96287, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962852}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 55.161891392299104, \"sum\": 55.161891392299104, \"min\": 55.161891392299104}}, \"EndTime\": 1596636220.962933, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962917}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.99211774553571, \"sum\": 68.99211774553571, \"min\": 68.99211774553571}}, \"EndTime\": 1596636220.962996, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.962978}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.38718523297991, \"sum\": 54.38718523297991, \"min\": 54.38718523297991}}, \"EndTime\": 1596636220.963058, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963041}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.949310302734375, \"sum\": 60.949310302734375, \"min\": 60.949310302734375}}, \"EndTime\": 1596636220.963124, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963107}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.94814191545759, \"sum\": 41.94814191545759, \"min\": 41.94814191545759}}, \"EndTime\": 1596636220.963188, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.96317}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.2034912109375, \"sum\": 39.2034912109375, \"min\": 39.2034912109375}}, \"EndTime\": 1596636220.96325, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963233}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 44.75896780831473, \"sum\": 44.75896780831473, \"min\": 44.75896780831473}}, \"EndTime\": 1596636220.963313, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963296}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 44.95074026925223, \"sum\": 44.95074026925223, \"min\": 44.95074026925223}}, \"EndTime\": 1596636220.963374, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963358}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.522731236049104, \"sum\": 56.522731236049104, \"min\": 56.522731236049104}}, \"EndTime\": 1596636220.963436, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963419}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"validation_mse_objective\": {\"count\": 1, \"max\": 51.76674107142857, \"sum\": 51.76674107142857, \"min\": 51.76674107142857}}, \"EndTime\": 1596636220.9635, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963483}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 70.69611467633929, \"sum\": 70.69611467633929, \"min\": 70.69611467633929}}, \"EndTime\": 1596636220.96356, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963543}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 61.159881591796875, \"sum\": 61.159881591796875, \"min\": 61.159881591796875}}, \"EndTime\": 1596636220.963622, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963605}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.84314400809152, \"sum\": 43.84314400809152, \"min\": 43.84314400809152}}, \"EndTime\": 1596636220.963685, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963668}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.56929670061384, \"sum\": 41.56929670061384, \"min\": 41.56929670061384}}, \"EndTime\": 1596636220.963746, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.96373}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.31520734514509, \"sum\": 48.31520734514509, \"min\": 48.31520734514509}}, \"EndTime\": 1596636220.963807, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963789}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 44.06783185686384, \"sum\": 44.06783185686384, \"min\": 44.06783185686384}}, \"EndTime\": 1596636220.963869, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963852}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.41592843191964, \"sum\": 82.41592843191964, \"min\": 82.41592843191964}}, \"EndTime\": 1596636220.963929, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963913}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.02143205915179, \"sum\": 79.02143205915179, \"min\": 79.02143205915179}}, \"EndTime\": 1596636220.96399, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.963973}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.01629638671875, \"sum\": 80.01629638671875, \"min\": 80.01629638671875}}, \"EndTime\": 1596636220.964052, 
\"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.964035}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.26613071986607, \"sum\": 79.26613071986607, \"min\": 79.26613071986607}}, \"EndTime\": 1596636220.964115, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.964098}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 91.53104945591518, \"sum\": 91.53104945591518, \"min\": 91.53104945591518}}, \"EndTime\": 1596636220.964176, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.96416}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 101.42341831752232, \"sum\": 101.42341831752232, \"min\": 101.42341831752232}}, \"EndTime\": 1596636220.96424, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.964222}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.98856026785714, \"sum\": 79.98856026785714, \"min\": 79.98856026785714}}, \"EndTime\": 1596636220.964303, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.964286}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 86.36410958426339, \"sum\": 86.36410958426339, \"min\": 86.36410958426339}}, \"EndTime\": 1596636220.964363, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, \"StartTime\": 1596636220.964346}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=8, validation mse_objective <loss>=56.0659920829\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=8, criteria=mse_objective, value=39.2034912109\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saving model for epoch: 8\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpMqZF4G/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #progress_metric: host=algo-1, completed 60 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 21, \"sum\": 21.0, \"min\": 21}, \"Total Records Seen\": {\"count\": 1, \"max\": 2470, \"sum\": 2470.0, \"min\": 2470}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 11, \"sum\": 11.0, \"min\": 11}}, \"EndTime\": 1596636220.970818, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 8}, 
\"StartTime\": 1596636220.89168}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2863.34193843 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5919583511352539, \"sum\": 0.5919583511352539, \"min\": 0.5919583511352539}}, \"EndTime\": 1596636220.997759, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.997693}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5635419845581054, \"sum\": 0.5635419845581054, \"min\": 0.5635419845581054}}, \"EndTime\": 1596636220.997834, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.997821}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8061096954345703, \"sum\": 0.8061096954345703, \"min\": 0.8061096954345703}}, \"EndTime\": 1596636220.997889, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.997874}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5537911987304688, \"sum\": 0.5537911987304688, \"min\": 0.5537911987304688}}, \"EndTime\": 1596636220.997957, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.997941}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4384043884277344, \"sum\": 0.4384043884277344, \"min\": 0.4384043884277344}}, \"EndTime\": 1596636220.998022, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998004}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4180921173095703, \"sum\": 0.4180921173095703, \"min\": 0.4180921173095703}}, \"EndTime\": 1596636220.998085, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998066}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.42144824981689455, \"sum\": 0.42144824981689455, \"min\": 0.42144824981689455}}, \"EndTime\": 1596636220.998148, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998131}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40610374450683595, \"sum\": 0.40610374450683595, \"min\": 0.40610374450683595}}, \"EndTime\": 1596636220.998213, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998196}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6271327590942383, \"sum\": 0.6271327590942383, \"min\": 0.6271327590942383}}, \"EndTime\": 1596636220.998281, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 
1596636220.998262}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8671025848388672, \"sum\": 0.8671025848388672, \"min\": 0.8671025848388672}}, \"EndTime\": 1596636220.998345, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998328}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5870054626464843, \"sum\": 0.5870054626464843, \"min\": 0.5870054626464843}}, \"EndTime\": 1596636220.998417, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.9984}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7124393463134766, \"sum\": 0.7124393463134766, \"min\": 0.7124393463134766}}, \"EndTime\": 1596636220.99848, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998463}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40952266693115236, \"sum\": 0.40952266693115236, \"min\": 0.40952266693115236}}, \"EndTime\": 1596636220.998544, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998528}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.39326107025146484, \"sum\": 0.39326107025146484, \"min\": 0.39326107025146484}}, \"EndTime\": 1596636220.998611, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998594}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4243130111694336, \"sum\": 0.4243130111694336, \"min\": 0.4243130111694336}}, \"EndTime\": 1596636220.998672, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998657}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.42252838134765625, \"sum\": 0.42252838134765625, \"min\": 0.42252838134765625}}, \"EndTime\": 1596636220.99874, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998722}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6517580413818359, \"sum\": 0.6517580413818359, \"min\": 0.6517580413818359}}, \"EndTime\": 1596636220.998806, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998787}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5776651000976563, \"sum\": 0.5776651000976563, \"min\": 0.5776651000976563}}, \"EndTime\": 1596636220.998869, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998851}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7856255340576171, \"sum\": 0.7856255340576171, \"min\": 
0.7856255340576171}}, \"EndTime\": 1596636220.998929, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998912}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6622343444824219, \"sum\": 0.6622343444824219, \"min\": 0.6622343444824219}}, \"EndTime\": 1596636220.998993, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.998977}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40700454711914064, \"sum\": 0.40700454711914064, \"min\": 0.40700454711914064}}, \"EndTime\": 1596636220.999064, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999046}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3918832015991211, \"sum\": 0.3918832015991211, \"min\": 0.3918832015991211}}, \"EndTime\": 1596636220.99913, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999112}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.43002010345458985, \"sum\": 0.43002010345458985, \"min\": 0.43002010345458985}}, \"EndTime\": 1596636220.999191, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999175}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3909777069091797, \"sum\": 0.3909777069091797, \"min\": 0.3909777069091797}}, \"EndTime\": 1596636220.999254, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999237}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9862805938720703, \"sum\": 0.9862805938720703, \"min\": 0.9862805938720703}}, \"EndTime\": 1596636220.999315, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999298}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9568215942382813, \"sum\": 0.9568215942382813, \"min\": 0.9568215942382813}}, \"EndTime\": 1596636220.999375, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999359}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9472815704345703, \"sum\": 0.9472815704345703, \"min\": 0.9472815704345703}}, \"EndTime\": 1596636220.999441, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999424}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9603143310546876, \"sum\": 0.9603143310546876, \"min\": 0.9603143310546876}}, \"EndTime\": 1596636220.999502, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 
9}, \"StartTime\": 1596636220.999487}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0393961334228516, \"sum\": 1.0393961334228516, \"min\": 1.0393961334228516}}, \"EndTime\": 1596636220.999565, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999549}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.1843854522705077, \"sum\": 1.1843854522705077, \"min\": 1.1843854522705077}}, \"EndTime\": 1596636220.999627, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999608}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9592027282714843, \"sum\": 0.9592027282714843, \"min\": 0.9592027282714843}}, \"EndTime\": 1596636220.99969, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999673}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9936135864257812, \"sum\": 0.9936135864257812, \"min\": 0.9936135864257812}}, \"EndTime\": 1596636220.999752, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.999736}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:40 INFO 139620994029376] #quality_metric: host=algo-1, epoch=9, train mse_objective <loss>=0.591958351135\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.52069527762277, \"sum\": 54.52069527762277, \"min\": 54.52069527762277}}, \"EndTime\": 1596636221.046535, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046473}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.753383091517854, \"sum\": 51.753383091517854, \"min\": 51.753383091517854}}, \"EndTime\": 1596636221.046613, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046599}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 65.55251639229911, \"sum\": 65.55251639229911, \"min\": 65.55251639229911}}, \"EndTime\": 1596636221.046675, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046658}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.61360822405134, \"sum\": 52.61360822405134, \"min\": 52.61360822405134}}, \"EndTime\": 1596636221.046743, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046726}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.443189348493306, \"sum\": 40.443189348493306, \"min\": 40.443189348493306}}, \"EndTime\": 1596636221.046808, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 
1596636221.04679}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 33.03868103027344, \"sum\": 33.03868103027344, \"min\": 33.03868103027344}}, \"EndTime\": 1596636221.046871, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046854}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 34.64771597726004, \"sum\": 34.64771597726004, \"min\": 34.64771597726004}}, \"EndTime\": 1596636221.04692, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.04691}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 33.96677507672991, \"sum\": 33.96677507672991, \"min\": 33.96677507672991}}, \"EndTime\": 1596636221.046957, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046949}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.37347848074777, \"sum\": 54.37347848074777, \"min\": 54.37347848074777}}, \"EndTime\": 1596636221.047009, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.046998}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 65.81076921735492, \"sum\": 65.81076921735492, \"min\": 65.81076921735492}}, \"EndTime\": 1596636221.047062, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047051}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.46570260184152, \"sum\": 53.46570260184152, \"min\": 53.46570260184152}}, \"EndTime\": 1596636221.047121, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047105}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.30450875418527, \"sum\": 59.30450875418527, \"min\": 59.30450875418527}}, \"EndTime\": 1596636221.047182, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047166}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.03291538783482, \"sum\": 37.03291538783482, \"min\": 37.03291538783482}}, \"EndTime\": 1596636221.047251, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047234}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.33295767647879, \"sum\": 36.33295767647879, \"min\": 36.33295767647879}}, \"EndTime\": 1596636221.04732, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047303}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.93784877232143, \"sum\": 39.93784877232143, \"min\": 
39.93784877232143}}, \"EndTime\": 1596636221.047392, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047374}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.599478585379465, \"sum\": 36.599478585379465, \"min\": 36.599478585379465}}, \"EndTime\": 1596636221.047458, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047441}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.831290108816965, \"sum\": 54.831290108816965, \"min\": 54.831290108816965}}, \"EndTime\": 1596636221.047521, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047503}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.676221575055806, \"sum\": 50.676221575055806, \"min\": 50.676221575055806}}, \"EndTime\": 1596636221.047582, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047565}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 68.63015747070312, \"sum\": 68.63015747070312, \"min\": 68.63015747070312}}, \"EndTime\": 1596636221.047646, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.04763}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 59.53775896344866, \"sum\": 59.53775896344866, \"min\": 59.53775896344866}}, \"EndTime\": 1596636221.047705, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.04769}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.46618434361049, \"sum\": 36.46618434361049, \"min\": 36.46618434361049}}, \"EndTime\": 1596636221.047768, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047752}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 32.407845633370535, \"sum\": 32.407845633370535, \"min\": 32.407845633370535}}, \"EndTime\": 1596636221.047821, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047807}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.293831961495535, \"sum\": 41.293831961495535, \"min\": 41.293831961495535}}, \"EndTime\": 1596636221.047889, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047871}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.786281040736604, \"sum\": 38.786281040736604, \"min\": 38.786281040736604}}, \"EndTime\": 1596636221.047951, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": 
\"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.047933}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.55046735491071, \"sum\": 82.55046735491071, \"min\": 82.55046735491071}}, \"EndTime\": 1596636221.048022, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048004}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.74686104910714, \"sum\": 78.74686104910714, \"min\": 78.74686104910714}}, \"EndTime\": 1596636221.048091, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048074}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.59713309151786, \"sum\": 78.59713309151786, \"min\": 78.59713309151786}}, \"EndTime\": 1596636221.048152, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048136}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.50820486886161, \"sum\": 78.50820486886161, \"min\": 78.50820486886161}}, \"EndTime\": 1596636221.048208, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048194}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.49991280691964, \"sum\": 81.49991280691964, \"min\": 81.49991280691964}}, \"EndTime\": 1596636221.048275, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048259}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.9866943359375, \"sum\": 84.9866943359375, \"min\": 84.9866943359375}}, \"EndTime\": 1596636221.048332, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048316}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 71.15733555385044, \"sum\": 71.15733555385044, \"min\": 71.15733555385044}}, \"EndTime\": 1596636221.048394, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048382}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.67519705636161, \"sum\": 80.67519705636161, \"min\": 80.67519705636161}}, \"EndTime\": 1596636221.048439, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636221.048425}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=9, validation mse_objective <loss>=54.5206952776\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=9, criteria=mse_objective, value=32.4078456334\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saving model for epoch: 9\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint 
to \"/tmp/tmpi8iNde/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #progress_metric: host=algo-1, completed 66 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 23, \"sum\": 23.0, \"min\": 23}, \"Total Records Seen\": {\"count\": 1, \"max\": 2697, \"sum\": 2697.0, \"min\": 2697}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 12, \"sum\": 12.0, \"min\": 12}}, \"EndTime\": 1596636221.054687, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 9}, \"StartTime\": 1596636220.971112}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2711.67486343 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5691322326660156, \"sum\": 0.5691322326660156, \"min\": 0.5691322326660156}}, \"EndTime\": 1596636221.085806, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.085693}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5461030197143555, \"sum\": 0.5461030197143555, \"min\": 0.5461030197143555}}, \"EndTime\": 1596636221.085934, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.085911}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7640380096435547, \"sum\": 0.7640380096435547, \"min\": 0.7640380096435547}}, \"EndTime\": 1596636221.086011, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.085992}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5371427917480469, \"sum\": 0.5371427917480469, \"min\": 0.5371427917480469}}, \"EndTime\": 1596636221.0861, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086065}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.39638576507568357, \"sum\": 0.39638576507568357, \"min\": 0.39638576507568357}}, \"EndTime\": 1596636221.086176, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086157}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3069977951049805, \"sum\": 0.3069977951049805, \"min\": 0.3069977951049805}}, \"EndTime\": 1596636221.086244, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086226}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3282325744628906, 
\"sum\": 0.3282325744628906, \"min\": 0.3282325744628906}}, \"EndTime\": 1596636221.086307, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086291}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3196380615234375, \"sum\": 0.3196380615234375, \"min\": 0.3196380615234375}}, \"EndTime\": 1596636221.086366, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086349}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6099211883544922, \"sum\": 0.6099211883544922, \"min\": 0.6099211883544922}}, \"EndTime\": 1596636221.086424, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086407}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.821375732421875, \"sum\": 0.821375732421875, \"min\": 0.821375732421875}}, \"EndTime\": 1596636221.086487, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.08647}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5716251754760742, \"sum\": 0.5716251754760742, \"min\": 0.5716251754760742}}, \"EndTime\": 1596636221.08655, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086532}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6834833526611328, \"sum\": 0.6834833526611328, \"min\": 0.6834833526611328}}, \"EndTime\": 1596636221.086613, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086596}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.35909461975097656, \"sum\": 0.35909461975097656, \"min\": 0.35909461975097656}}, \"EndTime\": 1596636221.086683, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086665}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.35650398254394533, \"sum\": 0.35650398254394533, \"min\": 0.35650398254394533}}, \"EndTime\": 1596636221.086796, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086744}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3778196334838867, \"sum\": 0.3778196334838867, \"min\": 0.3778196334838867}}, \"EndTime\": 1596636221.086883, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086864}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3287311935424805, \"sum\": 0.3287311935424805, \"min\": 0.3287311935424805}}, \"EndTime\": 1596636221.086958, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.086938}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6263276672363282, \"sum\": 0.6263276672363282, \"min\": 0.6263276672363282}}, \"EndTime\": 1596636221.087019, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087003}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5589892578125, \"sum\": 0.5589892578125, \"min\": 0.5589892578125}}, \"EndTime\": 1596636221.08708, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087062}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7551587677001953, \"sum\": 0.7551587677001953, \"min\": 0.7551587677001953}}, \"EndTime\": 1596636221.08714, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087124}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6382159423828125, \"sum\": 0.6382159423828125, \"min\": 0.6382159423828125}}, \"EndTime\": 1596636221.087212, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087193}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.33561756134033205, \"sum\": 0.33561756134033205, \"min\": 0.33561756134033205}}, \"EndTime\": 1596636221.087279, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087262}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.2862898063659668, \"sum\": 0.2862898063659668, \"min\": 0.2862898063659668}}, \"EndTime\": 1596636221.08735, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087329}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.36429779052734373, \"sum\": 0.36429779052734373, \"min\": 0.36429779052734373}}, \"EndTime\": 1596636221.08742, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087398}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3445055389404297, \"sum\": 0.3445055389404297, \"min\": 0.3445055389404297}}, \"EndTime\": 1596636221.087494, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087476}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9913970184326172, \"sum\": 0.9913970184326172, \"min\": 0.9913970184326172}}, \"EndTime\": 1596636221.087559, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087542}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 
0.9547843933105469, \"sum\": 0.9547843933105469, \"min\": 0.9547843933105469}}, \"EndTime\": 1596636221.08762, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087604}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9354166412353515, \"sum\": 0.9354166412353515, \"min\": 0.9354166412353515}}, \"EndTime\": 1596636221.087682, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087665}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9471031951904297, \"sum\": 0.9471031951904297, \"min\": 0.9471031951904297}}, \"EndTime\": 1596636221.087742, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087726}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9453947448730469, \"sum\": 0.9453947448730469, \"min\": 0.9453947448730469}}, \"EndTime\": 1596636221.0878, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087785}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0160385131835938, \"sum\": 1.0160385131835938, \"min\": 1.0160385131835938}}, \"EndTime\": 1596636221.087855, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.08784}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8668126678466797, \"sum\": 0.8668126678466797, \"min\": 0.8668126678466797}}, \"EndTime\": 1596636221.087916, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087898}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9461774444580078, \"sum\": 0.9461774444580078, \"min\": 0.9461774444580078}}, \"EndTime\": 1596636221.087988, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.087969}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=10, train mse_objective <loss>=0.569132232666\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.199497767857146, \"sum\": 53.199497767857146, \"min\": 53.199497767857146}}, \"EndTime\": 1596636221.150849, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.150749}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.971056256975444, \"sum\": 50.971056256975444, \"min\": 50.971056256975444}}, \"EndTime\": 1596636221.150969, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.150949}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.88442121233259, 
\"sum\": 62.88442121233259, \"min\": 62.88442121233259}}, \"EndTime\": 1596636221.151039, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151022}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.740836007254465, \"sum\": 51.740836007254465, \"min\": 51.740836007254465}}, \"EndTime\": 1596636221.151101, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151085}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.096343994140625, \"sum\": 36.096343994140625, \"min\": 36.096343994140625}}, \"EndTime\": 1596636221.151159, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151142}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 33.32743181501116, \"sum\": 33.32743181501116, \"min\": 33.32743181501116}}, \"EndTime\": 1596636221.151224, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151207}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 35.62382943289621, \"sum\": 35.62382943289621, \"min\": 35.62382943289621}}, \"EndTime\": 1596636221.151296, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151278}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.00898306710379, \"sum\": 36.00898306710379, \"min\": 36.00898306710379}}, \"EndTime\": 1596636221.151359, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151342}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.72407749720982, \"sum\": 53.72407749720982, \"min\": 53.72407749720982}}, \"EndTime\": 1596636221.151419, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151403}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.92476545061384, \"sum\": 62.92476545061384, \"min\": 62.92476545061384}}, \"EndTime\": 1596636221.151472, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151457}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.652186802455354, \"sum\": 52.652186802455354, \"min\": 52.652186802455354}}, \"EndTime\": 1596636221.151527, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151512}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 57.854993547712056, \"sum\": 57.854993547712056, \"min\": 57.854993547712056}}, \"EndTime\": 1596636221.15158, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151565}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.437630789620535, \"sum\": 37.437630789620535, \"min\": 37.437630789620535}}, \"EndTime\": 1596636221.151637, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151621}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.07505580357143, \"sum\": 38.07505580357143, \"min\": 38.07505580357143}}, \"EndTime\": 1596636221.151736, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151718}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 35.94669233049665, \"sum\": 35.94669233049665, \"min\": 35.94669233049665}}, \"EndTime\": 1596636221.151795, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151779}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.622462681361604, \"sum\": 36.622462681361604, \"min\": 36.622462681361604}}, \"EndTime\": 1596636221.151864, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151846}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.389940534319194, \"sum\": 53.389940534319194, \"min\": 53.389940534319194}}, \"EndTime\": 1596636221.151926, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151908}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 49.640127999441965, \"sum\": 49.640127999441965, \"min\": 49.640127999441965}}, \"EndTime\": 1596636221.151992, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.151975}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 66.79909842354911, \"sum\": 66.79909842354911, \"min\": 66.79909842354911}}, \"EndTime\": 1596636221.152054, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152038}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.09235055106027, \"sum\": 58.09235055106027, \"min\": 58.09235055106027}}, \"EndTime\": 1596636221.152111, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152096}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 31.714106968470983, \"sum\": 31.714106968470983, \"min\": 31.714106968470983}}, \"EndTime\": 1596636221.15217, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152153}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"validation_mse_objective\": {\"count\": 1, \"max\": 32.34466116768973, \"sum\": 32.34466116768973, \"min\": 32.34466116768973}}, \"EndTime\": 1596636221.152229, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152213}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 34.17328970772879, \"sum\": 34.17328970772879, \"min\": 34.17328970772879}}, \"EndTime\": 1596636221.152293, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152275}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 34.95051574707031, \"sum\": 34.95051574707031, \"min\": 34.95051574707031}}, \"EndTime\": 1596636221.152357, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.15234}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.47451346261161, \"sum\": 82.47451346261161, \"min\": 82.47451346261161}}, \"EndTime\": 1596636221.152418, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152402}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.53404889787946, \"sum\": 78.53404889787946, \"min\": 78.53404889787946}}, \"EndTime\": 1596636221.152476, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152461}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.37645612444196, \"sum\": 77.37645612444196, \"min\": 77.37645612444196}}, \"EndTime\": 1596636221.152532, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152516}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.00522286551339, \"sum\": 78.00522286551339, \"min\": 78.00522286551339}}, \"EndTime\": 1596636221.152591, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152575}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 76.62211390904018, \"sum\": 76.62211390904018, \"min\": 76.62211390904018}}, \"EndTime\": 1596636221.152651, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152635}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 73.3798828125, \"sum\": 73.3798828125, \"min\": 73.3798828125}}, \"EndTime\": 1596636221.152708, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152693}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 70.95948137555804, \"sum\": 70.95948137555804, \"min\": 70.95948137555804}}, \"EndTime\": 1596636221.152763, 
\"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152748}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.20258440290179, \"sum\": 80.20258440290179, \"min\": 80.20258440290179}}, \"EndTime\": 1596636221.152835, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.152816}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=10, validation mse_objective <loss>=53.1994977679\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=10, criteria=mse_objective, value=31.7141069685\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Epoch 10: Loss has not improved for 0 epochs.\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saving model for epoch: 10\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpgNuqF2/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #progress_metric: host=algo-1, completed 73 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 25, \"sum\": 25.0, \"min\": 25}, \"Total Records Seen\": {\"count\": 1, \"max\": 2924, \"sum\": 2924.0, \"min\": 2924}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 13, \"sum\": 13.0, \"min\": 13}}, \"EndTime\": 1596636221.160643, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 10}, \"StartTime\": 1596636221.054991}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2145.66095156 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5497225189208984, \"sum\": 0.5497225189208984, \"min\": 0.5497225189208984}}, \"EndTime\": 1596636221.197972, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.197896}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5311727905273438, \"sum\": 0.5311727905273438, \"min\": 0.5311727905273438}}, \"EndTime\": 1596636221.198083, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198037}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7262129211425781, \"sum\": 0.7262129211425781, \"min\": 0.7262129211425781}}, \"EndTime\": 1596636221.198167, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198146}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 
0.5232398223876953, \"sum\": 0.5232398223876953, \"min\": 0.5232398223876953}}, \"EndTime\": 1596636221.198283, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.19823}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3468292617797852, \"sum\": 0.3468292617797852, \"min\": 0.3468292617797852}}, \"EndTime\": 1596636221.198357, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198338}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3088748168945312, \"sum\": 0.3088748168945312, \"min\": 0.3088748168945312}}, \"EndTime\": 1596636221.198447, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198402}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3394091033935547, \"sum\": 0.3394091033935547, \"min\": 0.3394091033935547}}, \"EndTime\": 1596636221.198542, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198522}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3380941390991211, \"sum\": 0.3380941390991211, \"min\": 0.3380941390991211}}, \"EndTime\": 1596636221.198607, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198588}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.59541748046875, \"sum\": 0.59541748046875, \"min\": 0.59541748046875}}, \"EndTime\": 1596636221.198699, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.19868}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.77958740234375, \"sum\": 0.77958740234375, \"min\": 0.77958740234375}}, \"EndTime\": 1596636221.198761, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198745}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5584693145751953, \"sum\": 0.5584693145751953, \"min\": 0.5584693145751953}}, \"EndTime\": 1596636221.198822, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198805}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6578073883056641, \"sum\": 0.6578073883056641, \"min\": 0.6578073883056641}}, \"EndTime\": 1596636221.198887, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198869}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.36284900665283204, \"sum\": 0.36284900665283204, \"min\": 0.36284900665283204}}, \"EndTime\": 1596636221.198951, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", 
\"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198933}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3691220474243164, \"sum\": 0.3691220474243164, \"min\": 0.3691220474243164}}, \"EndTime\": 1596636221.199016, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.198998}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3374803924560547, \"sum\": 0.3374803924560547, \"min\": 0.3374803924560547}}, \"EndTime\": 1596636221.199078, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.19906}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.32719806671142576, \"sum\": 0.32719806671142576, \"min\": 0.32719806671142576}}, \"EndTime\": 1596636221.199144, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199126}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6045210647583008, \"sum\": 0.6045210647583008, \"min\": 0.6045210647583008}}, \"EndTime\": 1596636221.199206, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199188}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5421652603149414, \"sum\": 0.5421652603149414, \"min\": 0.5421652603149414}}, \"EndTime\": 1596636221.199267, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.19925}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.728388671875, \"sum\": 0.728388671875, \"min\": 0.728388671875}}, \"EndTime\": 1596636221.199333, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199316}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6171235656738281, \"sum\": 0.6171235656738281, \"min\": 0.6171235656738281}}, \"EndTime\": 1596636221.19939, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199373}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3079167747497559, \"sum\": 0.3079167747497559, \"min\": 0.3079167747497559}}, \"EndTime\": 1596636221.199459, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.19944}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.2894319725036621, \"sum\": 0.2894319725036621, \"min\": 0.2894319725036621}}, \"EndTime\": 1596636221.199525, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199507}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 
0.3174843215942383, \"sum\": 0.3174843215942383, \"min\": 0.3174843215942383}}, \"EndTime\": 1596636221.199586, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199569}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3296574783325195, \"sum\": 0.3296574783325195, \"min\": 0.3296574783325195}}, \"EndTime\": 1596636221.19965, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199633}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9941820526123046, \"sum\": 0.9941820526123046, \"min\": 0.9941820526123046}}, \"EndTime\": 1596636221.199713, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199695}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9533267974853515, \"sum\": 0.9533267974853515, \"min\": 0.9533267974853515}}, \"EndTime\": 1596636221.199777, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.19976}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9258057403564454, \"sum\": 0.9258057403564454, \"min\": 0.9258057403564454}}, \"EndTime\": 1596636221.19984, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199823}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9383955383300782, \"sum\": 0.9383955383300782, \"min\": 0.9383955383300782}}, \"EndTime\": 1596636221.199906, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199888}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9197686004638672, \"sum\": 0.9197686004638672, \"min\": 0.9197686004638672}}, \"EndTime\": 1596636221.199961, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.199945}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8976906585693359, \"sum\": 0.8976906585693359, \"min\": 0.8976906585693359}}, \"EndTime\": 1596636221.20003, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.200012}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8681898498535157, \"sum\": 0.8681898498535157, \"min\": 0.8681898498535157}}, \"EndTime\": 1596636221.200083, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.200066}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9657180023193359, \"sum\": 0.9657180023193359, \"min\": 0.9657180023193359}}, \"EndTime\": 1596636221.200133, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.200118}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=11, train mse_objective <loss>=0.549722518921\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.126329694475444, \"sum\": 52.126329694475444, \"min\": 52.126329694475444}}, \"EndTime\": 1596636221.262516, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.262427}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.338963099888396, \"sum\": 50.338963099888396, \"min\": 50.338963099888396}}, \"EndTime\": 1596636221.262622, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.262602}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.557364327566965, \"sum\": 60.557364327566965, \"min\": 60.557364327566965}}, \"EndTime\": 1596636221.26269, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.262672}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.057486397879465, \"sum\": 51.057486397879465, \"min\": 51.057486397879465}}, \"EndTime\": 1596636221.262757, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.262741}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.31921604701451, \"sum\": 36.31921604701451, \"min\": 36.31921604701451}}, \"EndTime\": 1596636221.262822, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.262805}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.764818464006694, \"sum\": 41.764818464006694, \"min\": 41.764818464006694}}, \"EndTime\": 1596636221.262886, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.26287}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.495509556361604, \"sum\": 43.495509556361604, \"min\": 43.495509556361604}}, \"EndTime\": 1596636221.262949, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.262933}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 44.628487723214285, \"sum\": 44.628487723214285, \"min\": 44.628487723214285}}, \"EndTime\": 1596636221.262989, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.26298}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 53.235316685267854, \"sum\": 53.235316685267854, \"min\": 53.235316685267854}}, \"EndTime\": 1596636221.26302, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263012}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 60.38863263811384, \"sum\": 60.38863263811384, \"min\": 60.38863263811384}}, \"EndTime\": 1596636221.263057, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263043}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.974879673549104, \"sum\": 51.974879673549104, \"min\": 51.974879673549104}}, \"EndTime\": 1596636221.263124, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263107}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.65158952985491, \"sum\": 56.65158952985491, \"min\": 56.65158952985491}}, \"EndTime\": 1596636221.263191, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263174}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.627964564732146, \"sum\": 41.627964564732146, \"min\": 41.627964564732146}}, \"EndTime\": 1596636221.263256, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263239}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.761117117745535, \"sum\": 41.761117117745535, \"min\": 41.761117117745535}}, \"EndTime\": 1596636221.26332, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263303}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.80903843470982, \"sum\": 36.80903843470982, \"min\": 36.80903843470982}}, \"EndTime\": 1596636221.263381, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263366}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 43.86797659737723, \"sum\": 43.86797659737723, \"min\": 43.86797659737723}}, \"EndTime\": 1596636221.263445, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263428}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.211805071149556, \"sum\": 52.211805071149556, \"min\": 52.211805071149556}}, \"EndTime\": 1596636221.263506, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263489}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 48.722298758370535, \"sum\": 48.722298758370535, \"min\": 48.722298758370535}}, \"EndTime\": 1596636221.263568, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263552}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"validation_mse_objective\": {\"count\": 1, \"max\": 65.23104422433036, \"sum\": 65.23104422433036, \"min\": 65.23104422433036}}, \"EndTime\": 1596636221.263631, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263614}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.845833914620535, \"sum\": 56.845833914620535, \"min\": 56.845833914620535}}, \"EndTime\": 1596636221.263695, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263678}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.814178466796875, \"sum\": 37.814178466796875, \"min\": 37.814178466796875}}, \"EndTime\": 1596636221.263762, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263743}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 38.370370047433035, \"sum\": 38.370370047433035, \"min\": 38.370370047433035}}, \"EndTime\": 1596636221.263834, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263815}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.665130615234375, \"sum\": 37.665130615234375, \"min\": 37.665130615234375}}, \"EndTime\": 1596636221.263907, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263887}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.75992693219866, \"sum\": 37.75992693219866, \"min\": 37.75992693219866}}, \"EndTime\": 1596636221.26398, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.263961}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 82.27117919921875, \"sum\": 82.27117919921875, \"min\": 82.27117919921875}}, \"EndTime\": 1596636221.264051, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264033}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.44383021763393, \"sum\": 78.44383021763393, \"min\": 78.44383021763393}}, \"EndTime\": 1596636221.264097, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264081}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 76.36092703683036, \"sum\": 76.36092703683036, \"min\": 76.36092703683036}}, \"EndTime\": 1596636221.264157, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264141}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.7459716796875, \"sum\": 77.7459716796875, \"min\": 77.7459716796875}}, \"EndTime\": 
1596636221.264223, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264206}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.0518798828125, \"sum\": 79.0518798828125, \"min\": 79.0518798828125}}, \"EndTime\": 1596636221.264283, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264266}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 71.99891444614956, \"sum\": 71.99891444614956, \"min\": 71.99891444614956}}, \"EndTime\": 1596636221.264348, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264331}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.04213169642857, \"sum\": 79.04213169642857, \"min\": 79.04213169642857}}, \"EndTime\": 1596636221.264412, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264395}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 84.73705182756696, \"sum\": 84.73705182756696, \"min\": 84.73705182756696}}, \"EndTime\": 1596636221.264473, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.264456}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=11, validation mse_objective <loss>=52.1263296945\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=11, criteria=mse_objective, value=36.319216047\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saving model for epoch: 11\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpLR8YPB/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #progress_metric: host=algo-1, completed 80 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 27, \"sum\": 27.0, \"min\": 27}, \"Total Records Seen\": {\"count\": 1, \"max\": 3151, \"sum\": 3151.0, \"min\": 3151}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 14, \"sum\": 14.0, \"min\": 14}}, \"EndTime\": 1596636221.272363, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 11}, \"StartTime\": 1596636221.160926}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=2034.39494877 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5333014678955078, \"sum\": 0.5333014678955078, \"min\": 0.5333014678955078}}, \"EndTime\": 
1596636221.310865, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.310748}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5185421371459961, \"sum\": 0.5185421371459961, \"min\": 0.5185421371459961}}, \"EndTime\": 1596636221.311001, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.310977}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6921291351318359, \"sum\": 0.6921291351318359, \"min\": 0.6921291351318359}}, \"EndTime\": 1596636221.311088, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311068}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.511768798828125, \"sum\": 0.511768798828125, \"min\": 0.511768798828125}}, \"EndTime\": 1596636221.311157, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311141}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3501822280883789, \"sum\": 0.3501822280883789, \"min\": 0.3501822280883789}}, \"EndTime\": 1596636221.311218, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311203}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.398499641418457, \"sum\": 0.398499641418457, \"min\": 0.398499641418457}}, \"EndTime\": 1596636221.311285, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311268}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.422766227722168, \"sum\": 0.422766227722168, \"min\": 0.422766227722168}}, \"EndTime\": 1596636221.311344, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311332}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.42366249084472657, \"sum\": 0.42366249084472657, \"min\": 0.42366249084472657}}, \"EndTime\": 1596636221.311378, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.31137}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5830637741088868, \"sum\": 0.5830637741088868, \"min\": 0.5830637741088868}}, \"EndTime\": 1596636221.311426, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311411}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7416510772705078, \"sum\": 0.7416510772705078, \"min\": 0.7416510772705078}}, \"EndTime\": 1596636221.311491, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 
1596636221.311474}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.547120590209961, \"sum\": 0.547120590209961, \"min\": 0.547120590209961}}, \"EndTime\": 1596636221.311541, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311525}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6352379989624023, \"sum\": 0.6352379989624023, \"min\": 0.6352379989624023}}, \"EndTime\": 1596636221.311601, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311585}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4034333801269531, \"sum\": 0.4034333801269531, \"min\": 0.4034333801269531}}, \"EndTime\": 1596636221.311652, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311637}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40385353088378906, \"sum\": 0.40385353088378906, \"min\": 0.40385353088378906}}, \"EndTime\": 1596636221.311707, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311691}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3498813247680664, \"sum\": 0.3498813247680664, \"min\": 0.3498813247680664}}, \"EndTime\": 1596636221.311766, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311751}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.40329761505126954, \"sum\": 0.40329761505126954, \"min\": 0.40329761505126954}}, \"EndTime\": 1596636221.311817, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311803}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.585829963684082, \"sum\": 0.585829963684082, \"min\": 0.585829963684082}}, \"EndTime\": 1596636221.311872, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311858}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5270445251464844, \"sum\": 0.5270445251464844, \"min\": 0.5270445251464844}}, \"EndTime\": 1596636221.311926, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311912}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7047769165039063, \"sum\": 0.7047769165039063, \"min\": 0.7047769165039063}}, \"EndTime\": 1596636221.311977, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.311964}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5986948776245117, \"sum\": 0.5986948776245117, \"min\": 
0.5986948776245117}}, \"EndTime\": 1596636221.312032, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312017}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4011281967163086, \"sum\": 0.4011281967163086, \"min\": 0.4011281967163086}}, \"EndTime\": 1596636221.312102, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312084}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3610478210449219, \"sum\": 0.3610478210449219, \"min\": 0.3610478210449219}}, \"EndTime\": 1596636221.312172, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312154}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.39124332427978514, \"sum\": 0.39124332427978514, \"min\": 0.39124332427978514}}, \"EndTime\": 1596636221.312242, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312223}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3900424575805664, \"sum\": 0.3900424575805664, \"min\": 0.3900424575805664}}, \"EndTime\": 1596636221.312312, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312292}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9950258636474609, \"sum\": 0.9950258636474609, \"min\": 0.9950258636474609}}, \"EndTime\": 1596636221.312381, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312362}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.952599105834961, \"sum\": 0.952599105834961, \"min\": 0.952599105834961}}, \"EndTime\": 1596636221.312443, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312426}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9179418182373047, \"sum\": 0.9179418182373047, \"min\": 0.9179418182373047}}, \"EndTime\": 1596636221.312504, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312487}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9333750152587891, \"sum\": 0.9333750152587891, \"min\": 0.9333750152587891}}, \"EndTime\": 1596636221.312563, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312547}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9750807189941406, \"sum\": 0.9750807189941406, \"min\": 0.9750807189941406}}, \"EndTime\": 1596636221.312621, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", 
\"epoch\": 12}, \"StartTime\": 1596636221.312605}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8842940521240235, \"sum\": 0.8842940521240235, \"min\": 0.8842940521240235}}, \"EndTime\": 1596636221.312683, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312667}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9605865478515625, \"sum\": 0.9605865478515625, \"min\": 0.9605865478515625}}, \"EndTime\": 1596636221.312737, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312721}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0418457794189453, \"sum\": 1.0418457794189453, \"min\": 1.0418457794189453}}, \"EndTime\": 1596636221.312792, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.312775}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=12, train mse_objective <loss>=0.533301467896\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.296373639787944, \"sum\": 51.296373639787944, \"min\": 51.296373639787944}}, \"EndTime\": 1596636221.418311, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418216}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 49.84484427315848, \"sum\": 49.84484427315848, \"min\": 49.84484427315848}}, \"EndTime\": 1596636221.418411, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418397}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.589512416294646, \"sum\": 58.589512416294646, \"min\": 58.589512416294646}}, \"EndTime\": 1596636221.418472, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418454}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.543914794921875, \"sum\": 50.543914794921875, \"min\": 50.543914794921875}}, \"EndTime\": 1596636221.418534, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418518}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.34496852329799, \"sum\": 36.34496852329799, \"min\": 36.34496852329799}}, \"EndTime\": 1596636221.418599, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418581}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 44.140834263392854, \"sum\": 44.140834263392854, \"min\": 44.140834263392854}}, \"EndTime\": 1596636221.418662, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, 
\"StartTime\": 1596636221.418644}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.01616995675223, \"sum\": 45.01616995675223, \"min\": 45.01616995675223}}, \"EndTime\": 1596636221.418722, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418705}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.81758771623884, \"sum\": 45.81758771623884, \"min\": 45.81758771623884}}, \"EndTime\": 1596636221.418778, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418763}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.89259992327009, \"sum\": 52.89259992327009, \"min\": 52.89259992327009}}, \"EndTime\": 1596636221.418841, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418823}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 58.20938982282366, \"sum\": 58.20938982282366, \"min\": 58.20938982282366}}, \"EndTime\": 1596636221.418901, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418884}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.427049909319194, \"sum\": 51.427049909319194, \"min\": 51.427049909319194}}, \"EndTime\": 1596636221.418964, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.418948}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 55.69675990513393, \"sum\": 55.69675990513393, \"min\": 55.69675990513393}}, \"EndTime\": 1596636221.419027, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419009}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.750217982700896, \"sum\": 40.750217982700896, \"min\": 40.750217982700896}}, \"EndTime\": 1596636221.419089, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419072}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.380336216517854, \"sum\": 39.380336216517854, \"min\": 39.380336216517854}}, \"EndTime\": 1596636221.41915, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419134}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.77100045340402, \"sum\": 36.77100045340402, \"min\": 36.77100045340402}}, \"EndTime\": 1596636221.419207, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419192}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.57053920200893, 
\"sum\": 45.57053920200893, \"min\": 45.57053920200893}}, \"EndTime\": 1596636221.419262, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419247}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 51.28269304547991, \"sum\": 51.28269304547991, \"min\": 51.28269304547991}}, \"EndTime\": 1596636221.419316, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419302}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.938302176339285, \"sum\": 47.938302176339285, \"min\": 47.938302176339285}}, \"EndTime\": 1596636221.419378, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.41936}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 63.925789969308035, \"sum\": 63.925789969308035, \"min\": 63.925789969308035}}, \"EndTime\": 1596636221.41944, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419423}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 55.779776436941965, \"sum\": 55.779776436941965, \"min\": 55.779776436941965}}, \"EndTime\": 1596636221.419501, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419486}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.604566301618306, \"sum\": 45.604566301618306, \"min\": 45.604566301618306}}, \"EndTime\": 1596636221.419565, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419546}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 39.49041312081473, \"sum\": 39.49041312081473, \"min\": 39.49041312081473}}, \"EndTime\": 1596636221.419626, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419609}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.044141496930806, \"sum\": 45.044141496930806, \"min\": 45.044141496930806}}, \"EndTime\": 1596636221.419687, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419671}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.02789306640625, \"sum\": 41.02789306640625, \"min\": 41.02789306640625}}, \"EndTime\": 1596636221.419793, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419774}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.99435860770089, \"sum\": 81.99435860770089, \"min\": 81.99435860770089}}, \"EndTime\": 1596636221.419855, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419838}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.48190743582589, \"sum\": 78.48190743582589, \"min\": 78.48190743582589}}, \"EndTime\": 1596636221.41991, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.419896}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 75.55623953683036, \"sum\": 75.55623953683036, \"min\": 75.55623953683036}}, \"EndTime\": 1596636221.419965, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.41995}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.66504778180804, \"sum\": 77.66504778180804, \"min\": 77.66504778180804}}, \"EndTime\": 1596636221.420028, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.420011}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 80.66505650111607, \"sum\": 80.66505650111607, \"min\": 80.66505650111607}}, \"EndTime\": 1596636221.420091, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.420074}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 72.78848266601562, \"sum\": 72.78848266601562, \"min\": 72.78848266601562}}, \"EndTime\": 1596636221.420152, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.420137}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 85.41846575055804, \"sum\": 85.41846575055804, \"min\": 85.41846575055804}}, \"EndTime\": 1596636221.420215, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.420197}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 85.19035121372768, \"sum\": 85.19035121372768, \"min\": 85.19035121372768}}, \"EndTime\": 1596636221.420278, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.420262}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=12, validation mse_objective <loss>=51.2963736398\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=12, criteria=mse_objective, value=36.3449685233\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saving model for epoch: 12\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint to \"/tmp/tmp9VcwDl/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #progress_metric: host=algo-1, completed 86 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches 
Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 29, \"sum\": 29.0, \"min\": 29}, \"Total Records Seen\": {\"count\": 1, \"max\": 3378, \"sum\": 3378.0, \"min\": 3378}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 15, \"sum\": 15.0, \"min\": 15}}, \"EndTime\": 1596636221.434491, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 12}, \"StartTime\": 1596636221.272659}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=1401.34026075 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5193297958374024, \"sum\": 0.5193297958374024, \"min\": 0.5193297958374024}}, \"EndTime\": 1596636221.471769, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.471645}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5076401138305664, \"sum\": 0.5076401138305664, \"min\": 0.5076401138305664}}, \"EndTime\": 1596636221.471949, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.471925}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6614204406738281, \"sum\": 0.6614204406738281, \"min\": 0.6614204406738281}}, \"EndTime\": 1596636221.472024, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472006}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.502095947265625, \"sum\": 0.502095947265625, \"min\": 0.502095947265625}}, \"EndTime\": 1596636221.472084, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472068}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.35719646453857423, \"sum\": 0.35719646453857423, \"min\": 0.35719646453857423}}, \"EndTime\": 1596636221.472148, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472131}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.42088966369628905, \"sum\": 0.42088966369628905, \"min\": 0.42088966369628905}}, \"EndTime\": 1596636221.472208, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472192}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4357235336303711, \"sum\": 0.4357235336303711, \"min\": 0.4357235336303711}}, \"EndTime\": 1596636221.472269, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472253}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": 
{\"train_mse_objective\": {\"count\": 1, \"max\": 0.4324298095703125, \"sum\": 0.4324298095703125, \"min\": 0.4324298095703125}}, \"EndTime\": 1596636221.472328, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472311}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5722632217407226, \"sum\": 0.5722632217407226, \"min\": 0.5722632217407226}}, \"EndTime\": 1596636221.472388, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.47237}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.7072139739990234, \"sum\": 0.7072139739990234, \"min\": 0.7072139739990234}}, \"EndTime\": 1596636221.47245, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472433}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5370499801635742, \"sum\": 0.5370499801635742, \"min\": 0.5370499801635742}}, \"EndTime\": 1596636221.472515, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472497}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6152399444580078, \"sum\": 0.6152399444580078, \"min\": 0.6152399444580078}}, \"EndTime\": 1596636221.472578, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472561}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.39066131591796877, \"sum\": 0.39066131591796877, \"min\": 0.39066131591796877}}, \"EndTime\": 1596636221.472641, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472624}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.37794353485107424, \"sum\": 0.37794353485107424, \"min\": 0.37794353485107424}}, \"EndTime\": 1596636221.472704, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472687}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3567653656005859, \"sum\": 0.3567653656005859, \"min\": 0.3567653656005859}}, \"EndTime\": 1596636221.472764, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472748}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4209107208251953, \"sum\": 0.4209107208251953, \"min\": 0.4209107208251953}}, \"EndTime\": 1596636221.472823, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472806}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5696995162963867, \"sum\": 0.5696995162963867, \"min\": 0.5696995162963867}}, \"EndTime\": 1596636221.472883, \"Dimensions\": 
{\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472866}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5132023239135742, \"sum\": 0.5132023239135742, \"min\": 0.5132023239135742}}, \"EndTime\": 1596636221.47294, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.472925}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6837903594970703, \"sum\": 0.6837903594970703, \"min\": 0.6837903594970703}}, \"EndTime\": 1596636221.472997, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.47298}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5823414611816407, \"sum\": 0.5823414611816407, \"min\": 0.5823414611816407}}, \"EndTime\": 1596636221.473059, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473042}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5049056625366211, \"sum\": 0.5049056625366211, \"min\": 0.5049056625366211}}, \"EndTime\": 1596636221.473123, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473106}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.37999069213867187, \"sum\": 0.37999069213867187, \"min\": 0.37999069213867187}}, \"EndTime\": 1596636221.473185, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473168}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.49861602783203124, \"sum\": 0.49861602783203124, \"min\": 0.49861602783203124}}, \"EndTime\": 1596636221.473247, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.47323}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4479901123046875, \"sum\": 0.4479901123046875, \"min\": 0.4479901123046875}}, \"EndTime\": 1596636221.473308, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473292}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9940711975097656, \"sum\": 0.9940711975097656, \"min\": 0.9940711975097656}}, \"EndTime\": 1596636221.473368, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473352}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9524091339111328, \"sum\": 0.9524091339111328, \"min\": 0.9524091339111328}}, \"EndTime\": 1596636221.473429, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 
1596636221.473412}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9114677429199218, \"sum\": 0.9114677429199218, \"min\": 0.9114677429199218}}, \"EndTime\": 1596636221.473487, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473472}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9307648468017579, \"sum\": 0.9307648468017579, \"min\": 0.9307648468017579}}, \"EndTime\": 1596636221.473547, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473532}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0108262634277343, \"sum\": 1.0108262634277343, \"min\": 1.0108262634277343}}, \"EndTime\": 1596636221.473604, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473588}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.8897056579589844, \"sum\": 0.8897056579589844, \"min\": 0.8897056579589844}}, \"EndTime\": 1596636221.473688, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473646}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.031774368286133, \"sum\": 1.031774368286133, \"min\": 1.031774368286133}}, \"EndTime\": 1596636221.473748, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473732}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0603325653076172, \"sum\": 1.0603325653076172, \"min\": 1.0603325653076172}}, \"EndTime\": 1596636221.473811, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.473793}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=13, train mse_objective <loss>=0.519329795837\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.67438180106027, \"sum\": 50.67438180106027, \"min\": 50.67438180106027}}, \"EndTime\": 1596636221.529927, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.529844}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 49.451607840401785, \"sum\": 49.451607840401785, \"min\": 49.451607840401785}}, \"EndTime\": 1596636221.530032, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530013}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.95966012137277, \"sum\": 56.95966012137277, \"min\": 56.95966012137277}}, \"EndTime\": 1596636221.530099, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 
1596636221.530081}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.158900669642854, \"sum\": 50.158900669642854, \"min\": 50.158900669642854}}, \"EndTime\": 1596636221.530157, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530142}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 32.6273433140346, \"sum\": 32.6273433140346, \"min\": 32.6273433140346}}, \"EndTime\": 1596636221.530211, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530197}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 36.78082275390625, \"sum\": 36.78082275390625, \"min\": 36.78082275390625}}, \"EndTime\": 1596636221.530265, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530249}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.604461669921875, \"sum\": 37.604461669921875, \"min\": 37.604461669921875}}, \"EndTime\": 1596636221.530328, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530311}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.55238996233259, \"sum\": 37.55238996233259, \"min\": 37.55238996233259}}, \"EndTime\": 1596636221.530387, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530372}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.655308314732146, \"sum\": 52.655308314732146, \"min\": 52.655308314732146}}, \"EndTime\": 1596636221.530442, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530427}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 56.366001674107146, \"sum\": 56.366001674107146, \"min\": 56.366001674107146}}, \"EndTime\": 1596636221.530496, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530481}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.976593017578125, \"sum\": 50.976593017578125, \"min\": 50.976593017578125}}, \"EndTime\": 1596636221.530551, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530537}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.955718994140625, \"sum\": 54.955718994140625, \"min\": 54.955718994140625}}, \"EndTime\": 1596636221.530616, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530598}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 33.741285051618306, \"sum\": 
33.741285051618306, \"min\": 33.741285051618306}}, \"EndTime\": 1596636221.53068, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530662}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 32.29398236955915, \"sum\": 32.29398236955915, \"min\": 32.29398236955915}}, \"EndTime\": 1596636221.530743, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530727}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 33.09012930733817, \"sum\": 33.09012930733817, \"min\": 33.09012930733817}}, \"EndTime\": 1596636221.530805, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530788}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 37.821742466517854, \"sum\": 37.821742466517854, \"min\": 37.821742466517854}}, \"EndTime\": 1596636221.530866, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.53085}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.56394740513393, \"sum\": 50.56394740513393, \"min\": 50.56394740513393}}, \"EndTime\": 1596636221.530929, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530912}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 47.267490931919646, \"sum\": 47.267490931919646, \"min\": 47.267490931919646}}, \"EndTime\": 1596636221.530989, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.530972}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 62.84667532784598, \"sum\": 62.84667532784598, \"min\": 62.84667532784598}}, \"EndTime\": 1596636221.531049, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531033}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.857639857700896, \"sum\": 54.857639857700896, \"min\": 54.857639857700896}}, \"EndTime\": 1596636221.531105, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.53109}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.56310599190848, \"sum\": 45.56310599190848, \"min\": 45.56310599190848}}, \"EndTime\": 1596636221.531161, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531146}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 35.44328744070871, \"sum\": 35.44328744070871, \"min\": 35.44328744070871}}, \"EndTime\": 1596636221.531219, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": 
\"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531203}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 45.45615495954241, \"sum\": 45.45615495954241, \"min\": 45.45615495954241}}, \"EndTime\": 1596636221.531272, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531258}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.96028355189732, \"sum\": 40.96028355189732, \"min\": 40.96028355189732}}, \"EndTime\": 1596636221.531334, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531316}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.66975620814732, \"sum\": 81.66975620814732, \"min\": 81.66975620814732}}, \"EndTime\": 1596636221.531396, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531378}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.62431117466518, \"sum\": 78.62431117466518, \"min\": 78.62431117466518}}, \"EndTime\": 1596636221.531459, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531442}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 74.96218436104911, \"sum\": 74.96218436104911, \"min\": 74.96218436104911}}, \"EndTime\": 1596636221.53152, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531502}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.68409946986607, \"sum\": 77.68409946986607, \"min\": 77.68409946986607}}, \"EndTime\": 1596636221.531583, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531566}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.28466796875, \"sum\": 79.28466796875, \"min\": 79.28466796875}}, \"EndTime\": 1596636221.531644, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531628}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 73.10413469587054, \"sum\": 73.10413469587054, \"min\": 73.10413469587054}}, \"EndTime\": 1596636221.531707, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531691}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 86.6595458984375, \"sum\": 86.6595458984375, \"min\": 86.6595458984375}}, \"EndTime\": 1596636221.531768, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531752}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": 
{\"count\": 1, \"max\": 80.78604561941964, \"sum\": 80.78604561941964, \"min\": 80.78604561941964}}, \"EndTime\": 1596636221.531828, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.531811}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=13, validation mse_objective <loss>=50.6743818011\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=13, criteria=mse_objective, value=32.2939823696\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saving model for epoch: 13\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpybGgWu/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #progress_metric: host=algo-1, completed 93 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 31, \"sum\": 31.0, \"min\": 31}, \"Total Records Seen\": {\"count\": 1, \"max\": 3605, \"sum\": 3605.0, \"min\": 3605}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 16, \"sum\": 16.0, \"min\": 16}}, \"EndTime\": 1596636221.552833, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 13}, \"StartTime\": 1596636221.43483}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=1920.81368789 records/second\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5073542404174805, \"sum\": 0.5073542404174805, \"min\": 0.5073542404174805}}, \"EndTime\": 1596636221.591176, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591058}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4979186248779297, \"sum\": 0.4979186248779297, \"min\": 0.4979186248779297}}, \"EndTime\": 1596636221.591283, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591264}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6338689041137695, \"sum\": 0.6338689041137695, \"min\": 0.6338689041137695}}, \"EndTime\": 1596636221.591402, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.59138}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.4937399673461914, \"sum\": 0.4937399673461914, \"min\": 0.4937399673461914}}, \"EndTime\": 1596636221.591512, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591493}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, 
\"max\": 0.32396926879882815, \"sum\": 0.32396926879882815, \"min\": 0.32396926879882815}}, \"EndTime\": 1596636221.591578, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.59156}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3375300598144531, \"sum\": 0.3375300598144531, \"min\": 0.3375300598144531}}, \"EndTime\": 1596636221.591643, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591625}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.35139869689941405, \"sum\": 0.35139869689941405, \"min\": 0.35139869689941405}}, \"EndTime\": 1596636221.591706, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.59169}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3450093460083008, \"sum\": 0.3450093460083008, \"min\": 0.3450093460083008}}, \"EndTime\": 1596636221.591771, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591753}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5626469421386718, \"sum\": 0.5626469421386718, \"min\": 0.5626469421386718}}, \"EndTime\": 1596636221.591837, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591819}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6761129760742187, \"sum\": 0.6761129760742187, \"min\": 0.6761129760742187}}, \"EndTime\": 1596636221.591899, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591881}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5278995513916016, \"sum\": 0.5278995513916016, \"min\": 0.5278995513916016}}, \"EndTime\": 1596636221.591959, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.591942}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5973702621459961, \"sum\": 0.5973702621459961, \"min\": 0.5973702621459961}}, \"EndTime\": 1596636221.592021, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592004}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3155790138244629, \"sum\": 0.3155790138244629, \"min\": 0.3155790138244629}}, \"EndTime\": 1596636221.592084, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592067}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.3046734809875488, \"sum\": 0.3046734809875488, \"min\": 0.3046734809875488}}, \"EndTime\": 1596636221.592145, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.59213}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.32352840423583984, \"sum\": 0.32352840423583984, \"min\": 0.32352840423583984}}, \"EndTime\": 1596636221.592208, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592191}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.33864028930664064, \"sum\": 0.33864028930664064, \"min\": 0.33864028930664064}}, \"EndTime\": 1596636221.592326, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592308}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5557180404663086, \"sum\": 0.5557180404663086, \"min\": 0.5557180404663086}}, \"EndTime\": 1596636221.592398, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.59238}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.500400390625, \"sum\": 0.500400390625, \"min\": 0.500400390625}}, \"EndTime\": 1596636221.592458, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592442}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.6650408172607422, \"sum\": 0.6650408172607422, \"min\": 0.6650408172607422}}, \"EndTime\": 1596636221.592517, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.5925}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.567607650756836, \"sum\": 0.567607650756836, \"min\": 0.567607650756836}}, \"EndTime\": 1596636221.592587, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592569}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5068864822387695, \"sum\": 0.5068864822387695, \"min\": 0.5068864822387695}}, \"EndTime\": 1596636221.592707, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592688}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.344544792175293, \"sum\": 0.344544792175293, \"min\": 0.344544792175293}}, \"EndTime\": 1596636221.59278, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592764}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.5112415313720703, \"sum\": 0.5112415313720703, \"min\": 0.5112415313720703}}, \"EndTime\": 1596636221.592853, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592834}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 
1, \"max\": 0.4532838439941406, \"sum\": 0.4532838439941406, \"min\": 0.4532838439941406}}, \"EndTime\": 1596636221.592925, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592908}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9914910125732422, \"sum\": 0.9914910125732422, \"min\": 0.9914910125732422}}, \"EndTime\": 1596636221.592987, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.592971}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9526802825927735, \"sum\": 0.9526802825927735, \"min\": 0.9526802825927735}}, \"EndTime\": 1596636221.59305, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.593032}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9063607788085938, \"sum\": 0.9063607788085938, \"min\": 0.9063607788085938}}, \"EndTime\": 1596636221.593115, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.593097}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9295653533935547, \"sum\": 0.9295653533935547, \"min\": 0.9295653533935547}}, \"EndTime\": 1596636221.593179, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.593163}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.9947978973388671, \"sum\": 0.9947978973388671, \"min\": 0.9947978973388671}}, \"EndTime\": 1596636221.59325, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.593232}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 0.88765380859375, \"sum\": 0.88765380859375, \"min\": 0.88765380859375}}, \"EndTime\": 1596636221.593321, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.593302}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0399272155761718, \"sum\": 1.0399272155761718, \"min\": 1.0399272155761718}}, \"EndTime\": 1596636221.593391, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.593374}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"train_mse_objective\": {\"count\": 1, \"max\": 1.0018037414550782, \"sum\": 1.0018037414550782, \"min\": 1.0018037414550782}}, \"EndTime\": 1596636221.593455, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.59344}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=14, train mse_objective <loss>=0.507354240417\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.21750749860491, 
\"sum\": 50.21750749860491, \"min\": 50.21750749860491}}, \"EndTime\": 1596636221.658433, \"Dimensions\": {\"model\": 0, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.6583}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 49.12467302594866, \"sum\": 49.12467302594866, \"min\": 49.12467302594866}}, \"EndTime\": 1596636221.658571, \"Dimensions\": {\"model\": 1, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658549}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 55.626861572265625, \"sum\": 55.626861572265625, \"min\": 55.626861572265625}}, \"EndTime\": 1596636221.658634, \"Dimensions\": {\"model\": 2, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658618}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 49.860055106026785, \"sum\": 49.860055106026785, \"min\": 49.860055106026785}}, \"EndTime\": 1596636221.658698, \"Dimensions\": {\"model\": 3, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658682}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 28.964869907924108, \"sum\": 28.964869907924108, \"min\": 28.964869907924108}}, \"EndTime\": 1596636221.65876, \"Dimensions\": {\"model\": 4, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658745}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 29.543725149972097, \"sum\": 29.543725149972097, \"min\": 29.543725149972097}}, \"EndTime\": 1596636221.658826, \"Dimensions\": {\"model\": 5, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658808}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 31.041174752371653, \"sum\": 31.041174752371653, \"min\": 31.041174752371653}}, \"EndTime\": 1596636221.65889, \"Dimensions\": {\"model\": 6, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658872}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 30.480933053152903, \"sum\": 30.480933053152903, \"min\": 30.480933053152903}}, \"EndTime\": 1596636221.658951, \"Dimensions\": {\"model\": 7, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658934}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 52.47280011858259, \"sum\": 52.47280011858259, \"min\": 52.47280011858259}}, \"EndTime\": 1596636221.659012, \"Dimensions\": {\"model\": 8, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.658997}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.822862897600444, \"sum\": 54.822862897600444, \"min\": 54.822862897600444}}, \"EndTime\": 1596636221.659065, \"Dimensions\": {\"model\": 9, \"Host\": \"algo-1\", 
\"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659051}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.58594185965402, \"sum\": 50.58594185965402, \"min\": 50.58594185965402}}, \"EndTime\": 1596636221.659118, \"Dimensions\": {\"model\": 10, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659104}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.381151471819194, \"sum\": 54.381151471819194, \"min\": 54.381151471819194}}, \"EndTime\": 1596636221.659173, \"Dimensions\": {\"model\": 11, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659158}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 28.927823747907365, \"sum\": 28.927823747907365, \"min\": 28.927823747907365}}, \"EndTime\": 1596636221.659226, \"Dimensions\": {\"model\": 12, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659212}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 29.1507568359375, \"sum\": 29.1507568359375, \"min\": 29.1507568359375}}, \"EndTime\": 1596636221.65929, \"Dimensions\": {\"model\": 13, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659273}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 30.076932634626115, \"sum\": 30.076932634626115, \"min\": 30.076932634626115}}, \"EndTime\": 1596636221.659353, \"Dimensions\": {\"model\": 14, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659334}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 30.182427542550222, \"sum\": 30.182427542550222, \"min\": 30.182427542550222}}, \"EndTime\": 1596636221.659417, \"Dimensions\": {\"model\": 15, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659401}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 50.009669712611604, \"sum\": 50.009669712611604, \"min\": 50.009669712611604}}, \"EndTime\": 1596636221.659479, \"Dimensions\": {\"model\": 16, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659462}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 46.67543683733259, \"sum\": 46.67543683733259, \"min\": 46.67543683733259}}, \"EndTime\": 1596636221.659541, \"Dimensions\": {\"model\": 17, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659524}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 61.94234357561384, \"sum\": 61.94234357561384, \"min\": 61.94234357561384}}, \"EndTime\": 1596636221.659603, \"Dimensions\": {\"model\": 18, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659586}\n\u001b[0m\n\u001b[34m#metrics 
{\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 54.04573713030134, \"sum\": 54.04573713030134, \"min\": 54.04573713030134}}, \"EndTime\": 1596636221.659666, \"Dimensions\": {\"model\": 19, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659649}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 41.30035400390625, \"sum\": 41.30035400390625, \"min\": 41.30035400390625}}, \"EndTime\": 1596636221.659725, \"Dimensions\": {\"model\": 20, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659709}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 35.32218061174665, \"sum\": 35.32218061174665, \"min\": 35.32218061174665}}, \"EndTime\": 1596636221.659782, \"Dimensions\": {\"model\": 21, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659766}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.35692923409598, \"sum\": 40.35692923409598, \"min\": 40.35692923409598}}, \"EndTime\": 1596636221.659838, \"Dimensions\": {\"model\": 22, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659823}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 40.526497977120535, \"sum\": 40.526497977120535, \"min\": 40.526497977120535}}, \"EndTime\": 1596636221.659899, \"Dimensions\": {\"model\": 23, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659881}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 81.30935232979911, \"sum\": 81.30935232979911, \"min\": 81.30935232979911}}, \"EndTime\": 1596636221.659961, \"Dimensions\": {\"model\": 24, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.659943}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.83721051897321, \"sum\": 78.83721051897321, \"min\": 78.83721051897321}}, \"EndTime\": 1596636221.660024, \"Dimensions\": {\"model\": 25, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660007}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 74.58261544363839, \"sum\": 74.58261544363839, \"min\": 74.58261544363839}}, \"EndTime\": 1596636221.660085, \"Dimensions\": {\"model\": 26, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660069}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.73871721540179, \"sum\": 77.73871721540179, \"min\": 77.73871721540179}}, \"EndTime\": 1596636221.660148, \"Dimensions\": {\"model\": 27, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660131}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 79.84386335100446, \"sum\": 79.84386335100446, \"min\": 79.84386335100446}}, \"EndTime\": 
1596636221.66021, \"Dimensions\": {\"model\": 28, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660193}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 77.89127894810268, \"sum\": 77.89127894810268, \"min\": 77.89127894810268}}, \"EndTime\": 1596636221.660271, \"Dimensions\": {\"model\": 29, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660255}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 86.89847237723214, \"sum\": 86.89847237723214, \"min\": 86.89847237723214}}, \"EndTime\": 1596636221.660332, \"Dimensions\": {\"model\": 30, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660316}\n\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"validation_mse_objective\": {\"count\": 1, \"max\": 78.30434744698661, \"sum\": 78.30434744698661, \"min\": 78.30434744698661}}, \"EndTime\": 1596636221.660393, \"Dimensions\": {\"model\": 31, \"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.660375}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, epoch=14, validation mse_objective <loss>=50.2175074986\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=14, criteria=mse_objective, value=28.9278237479\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Epoch 14: Loss has not improved for 0 epochs.\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saving model for epoch: 14\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpzzah66/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #progress_metric: host=algo-1, completed 100 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Total Batches Seen\": {\"count\": 1, \"max\": 33, \"sum\": 33.0, \"min\": 33}, \"Total Records Seen\": {\"count\": 1, \"max\": 3832, \"sum\": 3832.0, \"min\": 3832}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 227, \"sum\": 227.0, \"min\": 227}, \"Reset Count\": {\"count\": 1, \"max\": 17, \"sum\": 17.0, \"min\": 17}}, \"EndTime\": 1596636221.670888, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\", \"epoch\": 14}, \"StartTime\": 1596636221.553195}\n\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #throughput_metric: host=algo-1, train throughput=1926.18017471 records/second\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 WARNING 139620994029376] wait_for_all_workers will not sync workers since the kv store is not running distributed\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 WARNING 139620994029376] wait_for_all_workers will not sync workers since the kv store is not running distributed\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #early_stopping_criteria_metric: host=algo-1, epoch=14, 
criteria=mse_objective, value=28.9278237479\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #validation_score (algo-1) : ('mse_objective', 27.450618198939733)\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #validation_score (algo-1) : ('mse', 27.450618198939733)\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #validation_score (algo-1) : ('absolute_loss', 3.2589547293526784)\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, validation mse_objective <loss>=27.4506181989\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, validation mse <loss>=27.4506181989\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] #quality_metric: host=algo-1, validation absolute_loss <loss>=3.25895472935\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Best model found for hyperparameters: {\"lr_scheduler_step\": 10, \"wd\": 1, \"optimizer\": \"adam\", \"lr_scheduler_factor\": 0.99, \"l1\": 0.0, \"learning_rate\": 0.1, \"lr_scheduler_minimum_lr\": 1e-05}\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Saved checkpoint to \"/tmp/tmpRjFPvU/mx-mod-0000.params\"\u001b[0m\n\u001b[34m[08/05/2020 14:03:41 INFO 139620994029376] Test data is not provided.\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"totaltime\": {\"count\": 1, \"max\": 1972.836971282959, \"sum\": 1972.836971282959, \"min\": 1972.836971282959}, \"finalize.time\": {\"count\": 1, \"max\": 70.15609741210938, \"sum\": 70.15609741210938, \"min\": 70.15609741210938}, \"initialize.time\": {\"count\": 1, \"max\": 148.88596534729004, \"sum\": 148.88596534729004, \"min\": 148.88596534729004}, \"check_early_stopping.time\": {\"count\": 16, \"max\": 1.199960708618164, \"sum\": 7.309675216674805, \"min\": 0.20194053649902344}, \"setuptime\": {\"count\": 1, \"max\": 25.851964950561523, \"sum\": 25.851964950561523, \"min\": 25.851964950561523}, \"update.time\": {\"count\": 15, \"max\": 158.1871509552002, \"sum\": 1474.7467041015625, \"min\": 62.9270076751709}, \"epochs\": {\"count\": 1, \"max\": 15, \"sum\": 15.0, \"min\": 15}}, \"EndTime\": 1596636221.749706, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"Linear Learner\"}, \"StartTime\": 1596636219.98238}\n\u001b[0m\n\n2020-08-05 14:03:53 Uploading - Uploading generated training model\n2020-08-05 14:03:53 Completed - Training job completed\nTraining seconds: 73\nBillable seconds: 73\n"
]
],
[
[
"## Step 6 (B): Deploy the trained model\n\nSimilar to the XGBoost model, now that we've fit the model we need to deploy it. Also like the XGBoost model, we will use the lower level approach so that we have more control over the endpoint that gets created.\n\n### Build the model\n\nOf course, before we can deploy the model, we need to first create it. The `fit` method that we used earlier created some model artifacts and we can use these to construct a model object.",
"_____no_output_____"
]
],
[
[
"# First, we create a unique model name\nlinear_model_name = \"boston-update-linear-model\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# We also need to tell SageMaker which container should be used for inference and where it should\n# retrieve the model artifacts from. In our case, the linear-learner container that we used for training\n# can also be used for inference.\nlinear_primary_container = {\n \"Image\": linear_container,\n \"ModelDataUrl\": linear.model_data\n}\n\n# And lastly we construct the SageMaker model\nlinear_model_info = session.sagemaker_client.create_model(\n ModelName = linear_model_name,\n ExecutionRoleArn = role,\n PrimaryContainer = linear_primary_container)",
"_____no_output_____"
]
],
[
[
"### Create the endpoint configuration\n\nOnce we have the model we can start putting together the endpoint by creating an endpoint configuration.",
"_____no_output_____"
]
],
[
[
"# As before, we need to give our endpoint configuration a name which should be unique\nlinear_endpoint_config_name = \"boston-linear-endpoint-config-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# And then we ask SageMaker to construct the endpoint configuration\nlinear_endpoint_config_info = session.sagemaker_client.create_endpoint_config(\n EndpointConfigName = linear_endpoint_config_name,\n ProductionVariants = [{\n \"InstanceType\": \"ml.m4.xlarge\",\n \"InitialVariantWeight\": 1,\n \"InitialInstanceCount\": 1,\n \"ModelName\": linear_model_name,\n \"VariantName\": \"Linear-Model\"\n }])",
"_____no_output_____"
]
],
[
[
"### Deploy the endpoint\n\nNow that the endpoint configuration has been created, we can ask SageMaker to build our endpoint.\n\n**Note:** This is a friendly (repeated) reminder that you are about to deploy an endpoint. Make sure that you shut it down once you've finished with it!",
"_____no_output_____"
]
],
[
[
"# Again, we need a unique name for our endpoint\nendpoint_name = \"boston-update-endpoint-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# And then we can deploy our endpoint\nendpoint_info = session.sagemaker_client.create_endpoint(\n EndpointName = endpoint_name,\n EndpointConfigName = linear_endpoint_config_name)",
"_____no_output_____"
],
[
"endpoint_dec = session.wait_for_endpoint(endpoint_name)",
"---------------!"
]
],
[
[
"## Step 7 (B): Use the model\n\nJust like with the XGBoost model, we will send some data to our endpoint to make sure that it is working properly. An important note is that the output format for the linear model is different from the XGBoost model.",
"_____no_output_____"
]
],
[
[
"response = session.sagemaker_runtime_client.invoke_endpoint(\n EndpointName = endpoint_name,\n ContentType = 'text/csv',\n Body = ','.join(map(str, X_test.values[0])))",
"_____no_output_____"
],
[
"pprint(response)",
"{'Body': <botocore.response.StreamingBody object at 0x7ffa3abed518>,\n 'ContentType': 'application/json',\n 'InvokedProductionVariant': 'Linear-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '48',\n 'content-type': 'application/json',\n 'date': 'Wed, 5 Aug 2020 14:20:07 GMT',\n 'x-amzn-invoked-production-variant': 'Linear-Model',\n 'x-amzn-requestid': '281f99c0-873f-4707-ac83-96e4400d99d5'},\n 'HTTPStatusCode': 200,\n 'RequestId': '281f99c0-873f-4707-ac83-96e4400d99d5',\n 'RetryAttempts': 0}}\n"
],
[
"result = response['Body'].read().decode(\"utf-8\")",
"_____no_output_____"
],
[
"pprint(result)",
"'{\"predictions\": [{\"score\": 18.873676300048828}]}'\n"
],
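[
"# A minimal sketch of pulling the numeric prediction out of the linear\n# learner's JSON response shown above. 'json' is the standard-library module;\n# 'result' is the decoded string from the previous cell.\nimport json\n\njson.loads(result)['predictions'][0]['score']",
"_____no_output_____"
],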
[
"Y_test.values[0]",
"_____no_output_____"
]
],
[
[
"## Shut down the endpoint\n\nNow that we know that the Linear model's endpoint works, we can shut it down.",
"_____no_output_____"
]
],
[
[
"session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name)",
"_____no_output_____"
]
],
[
[
"## Step 6 (C): Deploy a combined model\n\nSo far we've constructed two separate models which we could deploy and use. Before we talk about how we can change a deployed endpoint from one configuration to another, let's consider a slightly different situation. Suppose that before we switch from using only the XGBoost model to only the Linear model, we first want to do something like an A-B test, where we send some of the incoming data to the XGBoost model and some of the data to the Linear model.\n\nFortunately, SageMaker provides this functionality. And to actually get SageMaker to do this for us is not too different from deploying a model in the way that we've already done. The only difference is that we need to list more than one model in the production variants parameter of the endpoint configuration.\n\nA reasonable question to ask is, how much data is sent to each of the models that I list in the production variants parameter? The answer is that it depends on the weight set for each model.\n\nSuppose that we have $k$ models listed in the production variants and that each model $i$ is assigned the weight $w_i$. Then each model $i$ will receive $w_i / W$ of the traffic where $W = \\sum_{i} w_i$.\n\nIn our case, since we have two models, the linear model and the XGBoost model, and each model has weight 1, we see that each model will get 1 / (1 + 1) = 1/2 of the data sent to the endpoint.",
"_____no_output_____"
]
],
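[
[
"# A quick sanity check of the traffic-splitting arithmetic described above:\n# with k models where model i has weight w_i, model i receives w_i / W of the\n# traffic, where W is the sum of all weights. These weights mirror the\n# endpoint configuration constructed in the next cell.\nweights = {'Linear-Model': 1, 'XGB-Model': 1}\ntotal_weight = sum(weights.values())\n{name: w / total_weight for name, w in weights.items()}",
"_____no_output_____"
]
],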
[
[
"# As before, we need to give our endpoint configuration a name which should be unique\ncombined_endpoint_config_name = \"boston-combined-endpoint-config-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# And then we ask SageMaker to construct the endpoint configuration\ncombined_endpoint_config_info = session.sagemaker_client.create_endpoint_config(\n EndpointConfigName = combined_endpoint_config_name,\n ProductionVariants = [\n { # First we include the linear model\n \"InstanceType\": \"ml.m4.xlarge\",\n \"InitialVariantWeight\": 1,\n \"InitialInstanceCount\": 1,\n \"ModelName\": linear_model_name,\n \"VariantName\": \"Linear-Model\"\n }, { # And next we include the xgb model\n \"InstanceType\": \"ml.m4.xlarge\",\n \"InitialVariantWeight\": 1,\n \"InitialInstanceCount\": 1,\n \"ModelName\": xgb_model_name,\n \"VariantName\": \"XGB-Model\"\n }])",
"_____no_output_____"
]
],
[
[
"Now that we've created the endpoint configuration, we can ask SageMaker to construct the endpoint.\n\n**Note:** This is a friendly (repeated) reminder that you are about to deploy an endpoint. Make sure that you shut it down once you've finished with it!",
"_____no_output_____"
]
],
[
[
"# Again, we need a unique name for our endpoint\nendpoint_name = \"boston-update-endpoint-\" + strftime(\"%Y-%m-%d-%H-%M-%S\", gmtime())\n\n# And then we can deploy our endpoint\nendpoint_info = session.sagemaker_client.create_endpoint(\n EndpointName = endpoint_name,\n EndpointConfigName = combined_endpoint_config_name)",
"_____no_output_____"
],
[
"endpoint_dec = session.wait_for_endpoint(endpoint_name)",
"---------------!"
]
],
[
[
"## Step 7 (C): Use the model\n\nNow that we've constructed an endpoint which sends data to both the XGBoost model and the linear model we can send some data to the endpoint and see what sort of results we get back.",
"_____no_output_____"
]
],
[
[
"response = session.sagemaker_runtime_client.invoke_endpoint(\n EndpointName = endpoint_name,\n ContentType = 'text/csv',\n Body = ','.join(map(str, X_test.values[0])))\npprint(response)",
"{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad9f2b0>,\n 'ContentType': 'application/json',\n 'InvokedProductionVariant': 'Linear-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '48',\n 'content-type': 'application/json',\n 'date': 'Wed, 5 Aug 2020 15:13:47 GMT',\n 'x-amzn-invoked-production-variant': 'Linear-Model',\n 'x-amzn-requestid': 'f076dc76-d8db-4f19-8ba2-db5e031dc726'},\n 'HTTPStatusCode': 200,\n 'RequestId': 'f076dc76-d8db-4f19-8ba2-db5e031dc726',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"Since looking at a single response doesn't give us a clear look at what is happening, we can instead take a look at a few different responses to our endpoint",
"_____no_output_____"
]
],
[
[
"for rec in range(10):\n response = session.sagemaker_runtime_client.invoke_endpoint(\n EndpointName = endpoint_name,\n ContentType = 'text/csv',\n Body = ','.join(map(str, X_test.values[rec])))\n pprint(response)\n result = response['Body'].read().decode(\"utf-8\")\n print(result)\n print(Y_test.values[rec])",
"{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad9fb00>,\n 'ContentType': 'application/json',\n 'InvokedProductionVariant': 'Linear-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '48',\n 'content-type': 'application/json',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'Linear-Model',\n 'x-amzn-requestid': 'a4c1549b-edc4-4eb9-b8e3-51f76a136d35'},\n 'HTTPStatusCode': 200,\n 'RequestId': 'a4c1549b-edc4-4eb9-b8e3-51f76a136d35',\n 'RetryAttempts': 0}}\n{\"predictions\": [{\"score\": 18.873676300048828}]}\n[19.8]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad9fc50>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': '0d730114-0d21-4780-b3b7-f6ab03896c6c'},\n 'HTTPStatusCode': 200,\n 'RequestId': '0d730114-0d21-4780-b3b7-f6ab03896c6c',\n 'RetryAttempts': 0}}\n12.8794584274\n[15.2]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad9fe80>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': '8c22e474-85ba-4ec7-b951-ac00ce6b6f59'},\n 'HTTPStatusCode': 200,\n 'RequestId': '8c22e474-85ba-4ec7-b951-ac00ce6b6f59',\n 'RetryAttempts': 0}}\n31.1261882782\n[30.1]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad920f0>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': '60264441-7376-4e3d-9c5d-402055978503'},\n 'HTTPStatusCode': 200,\n 'RequestId': '60264441-7376-4e3d-9c5d-402055978503',\n 'RetryAttempts': 0}}\n20.5612754822\n[23.1]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad92320>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': 'baef273f-fed5-4145-b30f-14189d3e63d7'},\n 'HTTPStatusCode': 200,\n 'RequestId': 'baef273f-fed5-4145-b30f-14189d3e63d7',\n 'RetryAttempts': 0}}\n31.3781528473\n[34.9]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad92550>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': '2f7fedb6-b0e4-4db0-8265-aaac54353545'},\n 'HTTPStatusCode': 200,\n 'RequestId': '2f7fedb6-b0e4-4db0-8265-aaac54353545',\n 'RetryAttempts': 0}}\n20.6239814758\n[24.4]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad92780>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 
'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': '0b5784ff-9144-492e-a91f-ac2df144e6ab'},\n 'HTTPStatusCode': 200,\n 'RequestId': '0b5784ff-9144-492e-a91f-ac2df144e6ab',\n 'RetryAttempts': 0}}\n28.2774791718\n[50.]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad9f518>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': '387749f0-0b4f-4dec-bc1b-70aff58f9007'},\n 'HTTPStatusCode': 200,\n 'RequestId': '387749f0-0b4f-4dec-bc1b-70aff58f9007',\n 'RetryAttempts': 0}}\n22.4035930634\n[22.4]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad9f9b0>,\n 'ContentType': 'text/csv; charset=utf-8',\n 'InvokedProductionVariant': 'XGB-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '13',\n 'content-type': 'text/csv; charset=utf-8',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'XGB-Model',\n 'x-amzn-requestid': 'c766c5b7-ea99-4e6f-a96f-dd0bc0a8830d'},\n 'HTTPStatusCode': 200,\n 'RequestId': 'c766c5b7-ea99-4e6f-a96f-dd0bc0a8830d',\n 'RetryAttempts': 0}}\n35.3886871338\n[50.]\n{'Body': <botocore.response.StreamingBody object at 0x7ffa3ad922b0>,\n 'ContentType': 'application/json',\n 'InvokedProductionVariant': 'Linear-Model',\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '48',\n 'content-type': 'application/json',\n 'date': 'Wed, 5 Aug 2020 15:14:39 GMT',\n 'x-amzn-invoked-production-variant': 'Linear-Model',\n 'x-amzn-requestid': '2380a784-b31d-4a76-bb41-46d16527ef99'},\n 'HTTPStatusCode': 200,\n 'RequestId': '2380a784-b31d-4a76-bb41-46d16527ef99',\n 'RetryAttempts': 0}}\n{\"predictions\": [{\"score\": 22.775558471679688}]}\n[21.2]\n"
]
],
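[
[
"To see the (roughly) even split empirically, we can tally the `InvokedProductionVariant` field over many requests. This is just a sketch reusing the endpoint and test data from above; with equal weights the counts should come out roughly 50/50.",
"_____no_output_____"
]
],
[
[
"# Tally which production variant served each request (rough empirical check)\nfrom collections import Counter\n\nvariant_counts = Counter()\nfor _ in range(100):\n    response = session.sagemaker_runtime_client.invoke_endpoint(\n        EndpointName = endpoint_name,\n        ContentType = 'text/csv',\n        Body = ','.join(map(str, X_test.values[0])))\n    variant_counts[response['InvokedProductionVariant']] += 1\n\nprint(variant_counts)  # with equal weights, expect roughly a 50/50 split",
"_____no_output_____"
]
],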
[
[
"If at some point we aren't sure about the properties of a deployed endpoint, we can use the `describe_endpoint` function to get SageMaker to return a description of the deployed endpoint.",
"_____no_output_____"
]
],
[
[
"pprint(session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name))",
"{'CreationTime': datetime.datetime(2020, 8, 5, 14, 33, 48, 662000, tzinfo=tzlocal()),\n 'EndpointArn': 'arn:aws:sagemaker:eu-west-2:519526115051:endpoint/boston-update-endpoint-2020-08-05-14-33-48',\n 'EndpointConfigName': 'boston-combined-endpoint-config-2020-08-05-14-33-08',\n 'EndpointName': 'boston-update-endpoint-2020-08-05-14-33-48',\n 'EndpointStatus': 'InService',\n 'LastModifiedTime': datetime.datetime(2020, 8, 5, 14, 40, 58, 135000, tzinfo=tzlocal()),\n 'ProductionVariants': [{'CurrentInstanceCount': 1,\n 'CurrentWeight': 1.0,\n 'DeployedImages': [{'ResolutionTime': datetime.datetime(2020, 8, 5, 14, 33, 50, 488000, tzinfo=tzlocal()),\n 'ResolvedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/linear-learner@sha256:d74b4f09e72e0461fb55920965950b72e53c9942ac73a7a2c057b028cad0adac',\n 'SpecifiedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/linear-learner:1'}],\n 'DesiredInstanceCount': 1,\n 'DesiredWeight': 1.0,\n 'VariantName': 'Linear-Model'},\n {'CurrentInstanceCount': 1,\n 'CurrentWeight': 1.0,\n 'DeployedImages': [{'ResolutionTime': datetime.datetime(2020, 8, 5, 14, 33, 50, 700000, tzinfo=tzlocal()),\n 'ResolvedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/xgboost@sha256:52e3bbc5f2a9462bed15983634d6f615439afd4e81e8778337d734c93083bd5e',\n 'SpecifiedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/xgboost:1'}],\n 'DesiredInstanceCount': 1,\n 'DesiredWeight': 1.0,\n 'VariantName': 'XGB-Model'}],\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '1160',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 05 Aug 2020 15:15:38 GMT',\n 'x-amzn-requestid': '56fc08de-953b-4391-ab32-3ac3c10a63cb'},\n 'HTTPStatusCode': 200,\n 'RequestId': '56fc08de-953b-4391-ab32-3ac3c10a63cb',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"## Updating an Endpoint\n\nNow suppose that we've done our A-B test and the new linear model is working well enough. What we'd like to do now is to switch our endpoint from sending data to both the XGBoost model and the linear model to sending data only to the linear model.\n\nOf course, we don't really want to shut down the endpoint to do this as doing so would interrupt service to whoever depends on our endpoint. Instead, we can ask SageMaker to **update** an endpoint to a new endpoint configuration.\n\nWhat is actually happening is that SageMaker will set up a new endpoint with the new characteristics. Once this new endpoint is running, SageMaker will switch the old endpoint so that it now points at the newly deployed model, making sure that this happens seamlessly in the background.",
"_____no_output_____"
]
],
[
[
"session.sagemaker_client.update_endpoint(EndpointName=endpoint_name, EndpointConfigName=linear_endpoint_config_name)",
"_____no_output_____"
]
],
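[
[
"The `wait_for_endpoint` call used below handles the waiting for us, but it can be instructive to see what that amounts to. The following minimal polling sketch uses `describe_endpoint`; the sleep interval is arbitrary. Note that running it would wait out the update, so skip it if you want to observe the `Updating` status in the next cells.",
"_____no_output_____"
]
],
[
[
"# A minimal sketch of what waiting on an endpoint update amounts to.\n# session.wait_for_endpoint (used below) does essentially this for us.\nimport time\n\nstatus = session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']\nwhile status == 'Updating':\n    time.sleep(30)  # arbitrary polling interval\n    status = session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name)['EndpointStatus']\nprint(status)",
"_____no_output_____"
]
],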
[
[
"To get a glimpse at what is going on, we can ask SageMaker to describe our in-use endpoint now, before the update process has completed. When we do so, we can see that the in-use endpoint still has the same characteristics it had before.",
"_____no_output_____"
]
],
[
[
"pprint(session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name))",
"{'CreationTime': datetime.datetime(2020, 8, 5, 14, 33, 48, 662000, tzinfo=tzlocal()),\n 'EndpointArn': 'arn:aws:sagemaker:eu-west-2:519526115051:endpoint/boston-update-endpoint-2020-08-05-14-33-48',\n 'EndpointConfigName': 'boston-combined-endpoint-config-2020-08-05-14-33-08',\n 'EndpointName': 'boston-update-endpoint-2020-08-05-14-33-48',\n 'EndpointStatus': 'Updating',\n 'LastModifiedTime': datetime.datetime(2020, 8, 5, 15, 19, 4, 865000, tzinfo=tzlocal()),\n 'ProductionVariants': [{'CurrentInstanceCount': 1,\n 'CurrentWeight': 1.0,\n 'DeployedImages': [{'ResolutionTime': datetime.datetime(2020, 8, 5, 14, 33, 50, 488000, tzinfo=tzlocal()),\n 'ResolvedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/linear-learner@sha256:d74b4f09e72e0461fb55920965950b72e53c9942ac73a7a2c057b028cad0adac',\n 'SpecifiedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/linear-learner:1'}],\n 'DesiredInstanceCount': 1,\n 'DesiredWeight': 1.0,\n 'VariantName': 'Linear-Model'},\n {'CurrentInstanceCount': 1,\n 'CurrentWeight': 1.0,\n 'DeployedImages': [{'ResolutionTime': datetime.datetime(2020, 8, 5, 14, 33, 50, 700000, tzinfo=tzlocal()),\n 'ResolvedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/xgboost@sha256:52e3bbc5f2a9462bed15983634d6f615439afd4e81e8778337d734c93083bd5e',\n 'SpecifiedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/xgboost:1'}],\n 'DesiredInstanceCount': 1,\n 'DesiredWeight': 1.0,\n 'VariantName': 'XGB-Model'}],\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '1159',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 05 Aug 2020 15:19:56 GMT',\n 'x-amzn-requestid': '1ebaa875-8360-40ce-8276-1be9b2b75ea8'},\n 'HTTPStatusCode': 200,\n 'RequestId': '1ebaa875-8360-40ce-8276-1be9b2b75ea8',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"If we now wait for the update process to complete, and then ask SageMaker to describe the endpoint, it will return the characteristics of the new endpoint configuration.",
"_____no_output_____"
]
],
[
[
"endpoint_dec = session.wait_for_endpoint(endpoint_name)",
"-------------!"
],
[
"pprint(session.sagemaker_client.describe_endpoint(EndpointName=endpoint_name))",
"{'CreationTime': datetime.datetime(2020, 8, 5, 14, 33, 48, 662000, tzinfo=tzlocal()),\n 'EndpointArn': 'arn:aws:sagemaker:eu-west-2:519526115051:endpoint/boston-update-endpoint-2020-08-05-14-33-48',\n 'EndpointConfigName': 'boston-linear-endpoint-config-2020-08-05-14-06-51',\n 'EndpointName': 'boston-update-endpoint-2020-08-05-14-33-48',\n 'EndpointStatus': 'InService',\n 'LastModifiedTime': datetime.datetime(2020, 8, 5, 15, 26, 12, 635000, tzinfo=tzlocal()),\n 'ProductionVariants': [{'CurrentInstanceCount': 1,\n 'CurrentWeight': 1.0,\n 'DeployedImages': [{'ResolutionTime': datetime.datetime(2020, 8, 5, 15, 19, 7, 5000, tzinfo=tzlocal()),\n 'ResolvedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/linear-learner@sha256:d74b4f09e72e0461fb55920965950b72e53c9942ac73a7a2c057b028cad0adac',\n 'SpecifiedImage': '644912444149.dkr.ecr.eu-west-2.amazonaws.com/linear-learner:1'}],\n 'DesiredInstanceCount': 1,\n 'DesiredWeight': 1.0,\n 'VariantName': 'Linear-Model'}],\n 'ResponseMetadata': {'HTTPHeaders': {'content-length': '770',\n 'content-type': 'application/x-amz-json-1.1',\n 'date': 'Wed, 05 Aug 2020 15:26:35 GMT',\n 'x-amzn-requestid': 'e962776e-e8af-49de-8bdd-64490bb10a22'},\n 'HTTPStatusCode': 200,\n 'RequestId': 'e962776e-e8af-49de-8bdd-64490bb10a22',\n 'RetryAttempts': 0}}\n"
]
],
[
[
"## Shut down the endpoint\n\nNow that we've finished, we need to make sure to shut down the endpoint.",
"_____no_output_____"
]
],
[
[
"session.sagemaker_client.delete_endpoint(EndpointName = endpoint_name)",
"_____no_output_____"
]
],
[
[
"## Optional: Clean up\n\nThe default notebook instance on SageMaker doesn't have a lot of excess disk space available. As you continue to complete and execute notebooks you will eventually fill up this disk space, leading to errors which can be difficult to diagnose. Once you are completely finished using a notebook it is a good idea to remove the files that you created along the way. Of course, you can do this from the terminal or from the notebook hub if you would like. The cell below contains some commands to clean up the created files from within the notebook.",
"_____no_output_____"
]
],
[
[
"# First we will remove all of the files contained in the data_dir directory\n!rm $data_dir/*\n\n# And then we delete the directory itself\n!rmdir $data_dir",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d7a4344e47f8e71a22024e31bf0ca3912b3160 | 29,730 | ipynb | Jupyter Notebook | notebooks/rbinz_heco.ipynb | catalyst-cooperative/rbinz-heco | 73dd03859f314747a157e50da99bd6e4c2d4a983 | [
"MIT"
] | null | null | null | notebooks/rbinz_heco.ipynb | catalyst-cooperative/rbinz-heco | 73dd03859f314747a157e50da99bd6e4c2d4a983 | [
"MIT"
] | 5 | 2019-12-09T17:20:03.000Z | 2019-12-18T17:16:59.000Z | notebooks/rbinz_heco.ipynb | catalyst-cooperative/rbinz-heco | 73dd03859f314747a157e50da99bd6e4c2d4a983 | [
"MIT"
] | null | null | null | 31.493644 | 395 | 0.542852 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import sys\nimport importlib\nimport re\n\nimport pandas as pd\nimport sqlalchemy as sa\n\nimport pudl",
"_____no_output_____"
],
[
"import logging\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\nhandler = logging.StreamHandler(stream=sys.stdout)\nformatter = logging.Formatter('%(message)s')\nhandler.setFormatter(formatter)\nlogger.handlers = [handler]",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport seaborn as sns\nsns.set()\n%matplotlib inline",
"_____no_output_____"
],
[
"mpl.rcParams['figure.figsize'] = (8,8)\nmpl.rcParams['figure.dpi'] = 150\npd.options.display.max_columns = 100\npd.options.display.max_rows = 100",
"_____no_output_____"
],
[
"pudl_settings = pudl.workspace.setup.get_defaults()\nferc1_engine = sa.create_engine(pudl_settings['ferc1_db'])\npudl_engine = sa.create_engine(pudl_settings['pudl_db'])\npudl_settings",
"_____no_output_____"
]
],
[
[
"# Sales by Rate Schedule",
"_____no_output_____"
],
[
"### Request 2:\n* **p304 L8(b)** TOTAL RESIDENTIAL - MWh Sold (difficult b/c rate schedules change)\n* **p304 L8(c)** TOTAL RESIDENTIAL - Revenue (difficult b/c rate schedules change)\n* **p304 L8(d)** TOTAL RESIDENTIAL - Avg Number of Customers (difficult b/c rate schedules change)\n* **p304 L24(b)** TOTAL SM/LG C&I - MWh Sold (difficult b/c rate schedules change)\n* **p304 L24(c)** TOTAL SM/LG C&I - Revenue (difficult b/c rate schedules change)\n* **p304 L24(d)** TOTAL SM/LG C&I - Avg. Number of Customers (difficult b/c rate schedules change)\n* **p304 L43(b)** TOTAL (MWh Sold?) (doable b/c it's a specific designated row)\n* **p304 L43(c)** TOTAL (Revenue?) (doable b/c it's a specific designated row)\n* **p304 L43(d)** TOTAL (Avg. Number of Customers?) (doable b/c it's a specific designated row)\n\nNote that due to lack of any kind of standardization or categorization in the rate schedule nomenclature, and the variety that exists between all the different states and across years, it is extremely difficult to extract anything useful from this table. Even the `total` line is kind of a mess. However, the same information can be pulled from the `f1_elctrc_oper_rev` table (p300) below.",
"_____no_output_____"
]
],
[
[
"sales_by_sched = (\n pd.read_sql(\"f1_sales_by_sched\", ferc1_engine).\n pipe(pudl.transform.ferc1.unpack_table,\n table_name=\"f1_sales_by_sched\",\n data_cols=[\n \"mwh_sold\",\n \"revenue\",\n \"avg_num_cstmr\"\n ],\n data_rows=[\n \"total\"\n ])\n .droplevel(1, axis=\"columns\")\n .dropna(how=\"all\")\n .query(\"report_prd==12\")\n .reset_index()\n .set_index([\"respondent_id\", \"report_year\", \"spplmnt_num\"])\n .drop(\"report_prd\", axis=\"columns\")\n .rename(columns={\n \"avg_num_cstmr\": \"avg_customers\",\n })\n .astype({\"avg_customers\": \"Int64\"})\n)",
"_____no_output_____"
]
],
[
[
"# O&M Expenses",
"_____no_output_____"
],
[
"### Request 2:\n* **p320 L5(b)** (Steam) Fuel (501)\n* **p320 L25(b)** (Nuclear) Fuel (518)\n* **p320 L63(b)** (Other) Fuel (547)\n* **p321 L76(b)** Purchased Power (555)",
"_____no_output_____"
]
],
[
[
"elec_oandm = (\n pd.read_sql(\"f1_elc_op_mnt_expn\", ferc1_engine)\n .pipe(\n pudl.transform.ferc1.unpack_table,\n table_name=\"f1_elc_op_mnt_expn\",\n data_cols=[\n \"crnt_yr_amt\"\n ],\n data_rows=[\n \"production_steam_ops_acct501_fuel\",\n \"production_nuclear_ops_acct518_fuel\",\n \"production_other_ops_acct547_fuel\",\n \"production_supply_acct555_purchased_power\",\n ])\n .droplevel(0, axis=\"columns\")\n .dropna(how=\"all\")\n .query(\"report_prd==12\")\n .reset_index()\n .set_index([\"respondent_id\", \"report_year\"])\n .drop([\"report_prd\", \"spplmnt_num\"], axis=\"columns\")\n .rename_axis(columns=None)\n)",
"_____no_output_____"
]
],
[
[
"# Operating Revenues",
"_____no_output_____"
],
[
"### Request 2:\n* **p300 L10(b,d,f)** TOTAL Sales to Ultimate Consumers (revenues, MWh, avg customers)\n* **p300 L11(b,d,f)** Sales for Resale (447) (revenues, MWh, avg customers)\n* **p300 L12(b,d,f)** TOTAL Sales of Electricity (revenues, MWh, avg customers)\n\n### Additional Data in lieu of `f1_sales_by_sched`\n* **p300 L2(b,d,f)** Residential Sales (440) (revenues, MWh, avg customers)\n* **p300 L4(b,d,f)** Commercial and Industrial Sales, Small (442) (revenues, MWh, avg customers)\n* **p300 L5(b,d,f)** Commercial and Industrial Sales, Large (442) (revenues, MWh, avg customers)\n",
"_____no_output_____"
]
],
[
[
"sales_hierarchical = (\n pd.read_sql(\"f1_elctrc_oper_rev\", ferc1_engine)\n .pipe(\n pudl.transform.ferc1.unpack_table,\n table_name=\"f1_elctrc_oper_rev\",\n data_cols=[\n \"rev_amt_crnt_yr\",\n \"mwh_sold_crnt_yr\",\n \"avg_cstmr_crntyr\"\n ],\n data_rows=[\n \"sales_acct440_residential\",\n \"sales_acct442_commercial_industrial_small\",\n \"sales_acct442_commercial_industrial_large\",\n \"sales_acct447_for_resale\",\n \"sales_ultimate_consumers_total\",\n \"sales_of_electricity_total\",\n \"sales_revenues_net_total\",\n ])\n .dropna(how=\"all\")\n .query(\"report_prd==12\")\n .reset_index()\n .set_index([\"respondent_id\", \"report_year\"])\n .drop([(\"report_prd\", \"\"), (\"spplmnt_num\", \"\")], axis=\"columns\")\n .rename_axis(columns=[None, None])\n .assign(avg_cstmr_crntyr=lambda x: x.loc[:, \"avg_cstmr_crntyr\"].astype(\"Int64\"))\n)\n\nsales_revenue = (\n sales_hierarchical.loc[:, \"rev_amt_crnt_yr\"]\n .add_suffix(\"_revenue\")\n)\nsales_mwh = (\n sales_hierarchical.loc[:, \"mwh_sold_crnt_yr\"]\n .add_suffix(\"_mwh\")\n)\nsales_customers = (\n sales_hierarchical.loc[:, \"avg_cstmr_crntyr\"]\n .add_suffix(\"_customers\")\n)\n\nelec_sales = pd.concat([sales_revenue, sales_mwh, sales_customers], axis=\"columns\")\nelec_sales = elec_sales.loc[:,elec_sales.columns.sort_values()]",
"_____no_output_____"
]
],
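[
[
"As a quick plausibility check on the merged revenue and MWh columns, we can compute the implied average residential price. This is only a sketch; the column names follow from the `add_suffix` calls above.",
"_____no_output_____"
]
],
[
[
"# Implied average residential price in USD/MWh -- a quick plausibility check\nimplied_price = (\n    elec_sales['sales_acct440_residential_revenue']\n    / elec_sales['sales_acct440_residential_mwh']\n)\nimplied_price.describe()",
"_____no_output_____"
]
],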
[
[
"# Income Statements",
"_____no_output_____"
],
[
"## Request 1\n* **p115 L2(g+h)** Electric Operating Revenues Current+Prior Year\n* **p115 L?(g+h)** Electric Operating Expenses Current+Prior Year\n* **p115 L26(g+h)** Electric Net Util Oper Inc Current Year Current+Prior Year\n\n## Request 2\n* **p114 L6(g)** Depreciation Expense (403)\n* **p114 L7(g)** Depreciation Expense for Asset Retirement Costs (403.1)\n* **p114 L8(g)** Amort. & Depl. of Utility Plant (404-405)\n* **p114 L9(g)** Amort. of Uitlity Plant Acq. Adj. (406)\n* **p114 L10(g)** Amort. of Property Losses, Urecov Plant and Reg. Study Costs (407)\n* **p114 L11(g)** Amort. of Conversion Expenses (407)\n* **p114 L14(g)** Taxes other than Income taxes (408.1)\n* **p114 L15(g)** Income Taxes - Federal (409.1)\n* **p114 L16(g)** Other (409.1)\n* **p114 L17(g)** Provision for Deferred Income Taxes (410.1)\n* **p114 L18(g)** (Less) Provision for Deferred Tax Credits (411.1)\n* **p114 L19(g)** Investment Tax Credit Adj. - Net (411.4) ",
"_____no_output_____"
]
],
[
[
"elec_income = (\n pd.read_sql(\"f1_income_stmnt\", ferc1_engine)\n .pipe(\n pudl.transform.ferc1.unpack_table,\n table_name=\"f1_income_stmnt\",\n data_cols = [\n \"cy_elctrc_total\",\n ],\n data_rows = [\n \"operating_revenues_acct400\",\n \"depreciation_expenses_acct403\",\n \"depreciation_expenses_asset_retirement_acct403_1\",\n \"amortization_depletion_utility_plant_acct404_405\",\n \"amortization_utility_plant_acquired_acct406\",\n \"amortized_conversion_expenses_acct407\",\n \"amortized_losses_acct407\",\n \"non_income_tax_acct408_1\",\n \"federal_income_tax_acct409_1\",\n \"other_acct409_1\",\n \"deferred_income_tax_acct410_1\",\n \"deferred_income_tax_credit_acct411_1\",\n \"investment_tax_credit_acct411_4\",\n \"utility_operating_expenses_total\",\n \"net_utility_operating_income\",\n ])\n .dropna(how=\"all\")\n .query(\"report_prd==12\")\n .droplevel(0, axis=\"columns\")\n .reset_index()\n .set_index([\"respondent_id\", \"report_year\"])\n .drop([\"report_prd\", \"spplmnt_num\"], axis=\"columns\")\n .rename_axis(columns=None)\n)",
"_____no_output_____"
]
],
[
[
"# Depreciation",
"_____no_output_____"
],
[
"### Request 1:\n* **P336 L12(f)** Electric Depreciation Expense",
"_____no_output_____"
]
],
[
[
"elec_depreciation = (\n pd.read_sql(\"f1_dacs_epda\", ferc1_engine)\n .pipe(\n pudl.transform.ferc1.unpack_table,\n table_name=\"f1_dacs_epda\",\n data_cols=['total'],\n data_rows=[\"total_electric_plant\"]\n )\n .dropna(how=\"all\")\n .query(\"report_prd==12\")\n .droplevel(0, axis=\"columns\")\n .reset_index()\n .set_index([\"respondent_id\", \"report_year\"])\n .drop([\"report_prd\", \"spplmnt_num\"], axis=\"columns\")\n .rename_axis(columns=None)\n)",
"_____no_output_____"
]
],
[
[
"# Plant in Service",
"_____no_output_____"
],
[
"### Request 1:\n* **P206 L104(b)** TOTAL Electric Plant in Service Bal Beginning of Year\n* **P206 L104(g)** TOTAL Electric Plant in Service Bal End of Year",
"_____no_output_____"
]
],
[
[
"elec_plant_in_service = (\n pd.read_sql(\"f1_plant_in_srvce\", ferc1_engine)\n .pipe(\n pudl.transform.ferc1.unpack_table,\n table_name=\"f1_plant_in_srvce\",\n data_cols=[\"begin_yr_bal\", \"yr_end_bal\"],\n data_rows=[\"electric_plant_in_service_total\"]\n )\n .dropna(how=\"all\")\n .query(\"report_prd==12\")\n .droplevel(1, axis=\"columns\")\n .reset_index()\n .set_index([\"respondent_id\", \"report_year\"])\n .drop([\"report_prd\", \"spplmnt_num\"], axis=\"columns\")\n .rename(columns={\n \"begin_yr_bal\": \"starting_balance\",\n \"yr_end_bal\": \"ending_balance\",\n })\n)",
"_____no_output_____"
]
],
[
[
"# EIA 861 Sales by Customer Class",
"_____no_output_____"
],
[
"## Request 2:\n\n**Pull:**\n * Annual revenues (USD)\n * Annual sales (MWh)\n * Annual customers counts\n\n**For each of:**\n * Residential customers\n * Commercial customers\n * Industrial customers\n * All Customers (even though are others like Transportation sales in there?)",
"_____no_output_____"
]
],
[
[
"import pudl.extract.eia861\neia861_years = range(1999,2019)\neia861_extractor = pudl.extract.eia861.ExtractorExcel(\n dataset_name=\"eia861\",\n years=eia861_years,\n pudl_settings=pudl_settings\n)\neia861_dfs = eia861_extractor.create_dfs(years=eia861_years)\nsales_eia861 = eia861_dfs[\"sales_eia861_states\"]\ncols = [\n \"report_year\",\n \"utility_id_eia\",\n \"state\",\n \n \"residential_revenues\",\n \"residential_sales_mwh\",\n \"residential_customers\",\n \n \"commercial_revenues\",\n \"commercial_sales_mwh\",\n \"commercial_customers\",\n \n \"industrial_revenues\",\n \"industrial_sales_mwh\",\n \"industrial_customers\",\n \n \"transportation_revenues\",\n \"transportation_sales_mwh\",\n \"transportation_customers\",\n \n \"other_revenues\",\n \"other_sales_mwh\",\n \"other_customers\",\n \n \"total_revenues\",\n \"total_sales_mwh\",\n \"total_customers\",\n]\nsales_eia861 = (\n sales_eia861.loc[:,cols].reset_index(drop=True)\n .query(\"utility_id_eia not in (88888, 99999)\")\n .assign(report_year=lambda x: pd.to_numeric(x.report_year, errors=\"coerce\"))\n .dropna(subset=[\"report_year\", \"utility_id_eia\"])\n .astype({\"report_year\": int, \"utility_id_eia\": int}, errors=\"ignore\")\n)\n\nrev_cols = sales_eia861.filter(regex=\".*_revenues$\").columns\nfor col in rev_cols:\n sales_eia861.loc[:,col] = 1000.0 * pd.to_numeric(sales_eia861[col], errors=\"coerce\")\n\ncust_cols = sales_eia861.filter(regex=\".*_customers$\").columns\nfor col in cust_cols:\n sales_eia861.loc[:,col] = pd.to_numeric(sales_eia861[col], errors=\"coerce\").astype(\"Int64\")\n\nmwh_cols = sales_eia861.filter(regex=\".*_sales_mwh$\").columns\nfor col in mwh_cols:\n sales_eia861.loc[:,col] = pd.to_numeric(sales_eia861[col], errors=\"coerce\")\n\n#new_df = pd.DataFrame()\n#for customer_class in [\"residential\", \"commercial\", \"industrial\", \"transportation\", \"other\", \"total\"]:\n# tmp_df = (\n# sales_eia861.set_index([\"utility_id_eia\", \"report_year\", \"state\"])\n# .filter(regex=f\"^{customer_class}_.*\")\n# .assign(customer_class=customer_class)\n# .rename(columns=lambda x: re.sub(f\"^{customer_class}_\", \"\", x))\n# )\n# new_df = new_df.append(tmp_df)\n\n#sales_eia861 = (\n# new_df.reset_index()\n# .assign(\n# revenues=lambda x: 1000.0 * pd.to_numeric(x.revenues, errors=\"coerce\"),\n# customers=lambda x: pd.to_numeric(x.customers, errors=\"coerce\"),\n# sales_mwh=lambda x: pd.to_numeric(x.sales_mwh, errors=\"coerce\")\n# )\n# .astype({\"customers\": \"Int64\"})\n# .set_index([\"utility_id_eia\", \"state\", \"report_year\", \"customer_class\"])\n# .reset_index()\n#)",
"_____no_output_____"
],
[
"ferc1_dfs = {\n \"elec_oandm_ferc1\": elec_oandm,\n \"elec_sales_ferc1\": elec_sales,\n \"elec_income_ferc1\": elec_income,\n \"elec_depreciation_ferc1\": elec_depreciation,\n \"elec_plant_in_service_ferc1\": elec_plant_in_service,\n}",
"_____no_output_____"
]
],
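[
[
"The commented-out block above sketches a wide-to-long reshape of the EIA 861 sales data. An equivalent approach using `pandas.melt` is shown below as a hedged sketch; the `n=1` split relies on the `<customer_class>_<measure>` column naming used above.",
"_____no_output_____"
]
],
[
[
"# A wide-to-long reshape of sales_eia861, equivalent in spirit to the\n# commented-out block above. Relies on the <customer_class>_<measure>\n# column naming (e.g. residential_sales_mwh).\nmelted = sales_eia861.melt(id_vars=[\"report_year\", \"utility_id_eia\", \"state\"])\nmelted[[\"customer_class\", \"measure\"]] = (\n    melted[\"variable\"].str.split(\"_\", n=1, expand=True)\n)\nmelted = melted.drop(columns=\"variable\")\nmelted.head()",
"_____no_output_____"
]
],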
[
[
"# Check Data Quality",
"_____no_output_____"
],
[
"## Spot check dataframes",
"_____no_output_____"
]
],
[
[
"elec_plant_in_service.columns",
"_____no_output_____"
],
[
"elec_depreciation.columns",
"_____no_output_____"
],
[
"print(elec_plant_in_service.count())",
"_____no_output_____"
],
[
"df = pd.merge(elec_plant_in_service, elec_depreciation, left_index=True, right_index=True)",
"_____no_output_____"
],
[
"sns.scatterplot(x=\"total_electric_plant\", y=\"ending_balance\", data=df)",
"_____no_output_____"
],
[
"elec_income.columns",
"_____no_output_____"
],
[
"elec_income.sample(10)",
"_____no_output_____"
],
[
"df = elec_income.reset_index()\ndf[\"net_income_calculated\"] = df.operating_revenues_acct400 - df.utility_operating_expenses_total\nincome_totals = [\n \"operating_revenues_acct400\",\n \"utility_operating_expenses_total\",\n \"net_utility_operating_income\",\n \"net_income_calculated\",\n]\nfor col in income_totals:\n sns.lineplot(x=\"report_year\", y=col, data=df, estimator=\"sum\", label=col)\nplt.ylabel(\"USD\")\nplt.legend()\nplt.show();",
"_____no_output_____"
],
[
"sns.scatterplot(x=\"net_utility_operating_income\", y=\"net_income_calculated\", data=df)",
"_____no_output_____"
],
[
"df = elec_sales.reset_index()\nmwh_cols = df.filter(regex=\".*acct.*mwh$\").columns\ndf = pudl.transform.ferc1.oob_to_nan(df, mwh_cols, ub=1e9)\ncust_cols = df.filter(regex=\".*acct.*customers$\").columns\ndf.loc[:,cust_cols] = df.loc[:,cust_cols].astype(float)\nrev_cols = df.filter(regex=\".*acct.*revenue$\").columns",
"_____no_output_____"
],
[
"mwh_cols",
"_____no_output_____"
],
[
"for var in mwh_cols:\n sns.lineplot(x=\"report_year\", y=var, data=df, estimator=\"mean\", label=var)\n plt.ylabel(\"Electricity Sold [MWh]\")\n plt.xlabel(None)\n plt.legend()\n plt.show()\n",
"_____no_output_____"
],
[
"for var in cust_cols:\n sns.lineplot(x=\"report_year\", y=var, data=df, estimator=\"mean\", label=var)\n plt.ylabel(\"Number of Customers\")\n plt.xlabel(None)\n plt.legend()\n plt.show()\n",
"_____no_output_____"
],
[
"df.query(\"respondent_id==151\")",
"_____no_output_____"
],
[
"sns.lineplot(x=\"report_year\", y=\"sales_acct447_for_resale_customers\", data=df, units=\"respondent_id\", estimator=None)",
"_____no_output_____"
],
[
"for var in rev_cols:\n sns.lineplot(x=\"report_year\", y=var, data=df, estimator=\"mean\", label=var)\nplt.ylabel(\"Electricity Revenues [USD]\")\nplt.xlabel(None)\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"mwh_cols",
"_____no_output_____"
],
[
"sns.scatterplot(x=\"sales_acct440_residential_mwh\", y=\"sales_acct440_residential_revenue\", data=df, alpha=0.1, label=\"Residential\")\nplt.show()\nsns.scatterplot(x=\"sales_acct442_commercial_industrial_small_mwh\", y=\"sales_acct442_commercial_industrial_small_revenue\", data=df, alpha=0.1, label=\"Small C&I\")\nplt.show()\nsns.scatterplot(x=\"sales_acct442_commercial_industrial_large_mwh\", y=\"sales_acct442_commercial_industrial_large_revenue\", data=df, alpha=0.1, label=\"Large C&I\")\nplt.show()",
"_____no_output_____"
],
[
"df = elec_sales.reset_index()\nfor var in df.filter(regex=\".*acct.*customers$\").columns:\n sns.lineplot(x=\"report_year\", y=var, data=df, estimator=\"sum\", label=var)\n\nplt.ylabel(\"Customers\")\nplt.xlabel(None)\nplt.legend()",
"_____no_output_____"
],
[
"df = elec_oandm.reset_index()\nfor var in df.filter(regex=\"^production_.*\").columns:\n sns.lineplot(x=\"report_year\", y=var, data=df, estimator=\"sum\", label=var.split('_')[1])\nplt.ylabel(\"[USD]\")\nplt.xlabel(None)\nplt.title(\"Total Fuel / Purchased Power Costs\")\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"sns.lineplot(x=\"report_year\", y=\"production_steam_ops_acct501_fuel\", data=df, units=\"respondent_id\", estimator=None, alpha=0.1)\nplt.show()\nsns.lineplot(x=\"report_year\", y=\"production_nuclear_ops_acct518_fuel\", data=df, units=\"respondent_id\", estimator=None, alpha=0.1)\nplt.show()\nsns.lineplot(x=\"report_year\", y=\"production_other_ops_acct547_fuel\", data=df, units=\"respondent_id\", estimator=None, alpha=0.1)\nplt.show()\nsns.lineplot(x=\"report_year\", y=\"production_supply_acct555_purchased_power\", data=df, units=\"respondent_id\", estimator=None, alpha=0.1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Prepare Data for Output",
"_____no_output_____"
],
[
"## Select requested utilities\n* Use Binz' target list to select a subset of the tables/columns.\n* Add missing & unmapped `utility_id_ferc1` values to the (FERC) target list:\n - **`eia 22500`** : `ferc1 191, 276` (Westar Energy)\n - **`eia 13780`** : `ferc1 121` (Northern States Power Company - WI)\n - **`eia 13809, 13902`** : `ferc1 122` (Northwestern Public Service Co)\n* Merge in additional utility name/ID fields for readability.",
"_____no_output_____"
]
],
[
[
"# Grab the EIA/FERC Utility IDs & Names:\nutilities_eia = pd.read_sql(\"utilities_eia\", pudl_engine)\nutilities_ferc1 = pd.read_sql(\"utilities_ferc1\", pudl_engine)\n\n# Get Binz' list of utilities based on EIA IDs:\nrbinz_eia_utils = pd.read_csv(\"rbinz_instructions.csv\", index_col=\"utility_id_eia\")\n\n# Infer FERC 1 Utility IDs for Binz' targets:\nutility_id_eia_targets = (\n rbinz_eia_utils\n .merge(utilities_eia, how=\"left\", on=\"utility_id_eia\", suffixes=(\"_rbinz\", \"_pudl\"))\n .astype({\"utility_id_pudl\": 'Int64'})\n .dropna(subset=[\"utility_id_pudl\"])\n .merge(utilities_ferc1, how=\"left\", on=\"utility_id_pudl\")\n .astype({\"utility_id_ferc1\": 'Int64'})\n .set_index(\"utility_id_eia\")\n .dropna(subset=[\"utility_id_ferc1\"])\n)\n\n# Add in a few FERC1 IDs that were missing:\nutility_id_ferc1_targets = set(utility_id_eia_targets.utility_id_ferc1).union({121, 122, 191, 276})",
"_____no_output_____"
],
[
"binz_out = {}\nfor k in ferc1_dfs.keys():\n binz_out[k]= (\n ferc1_dfs[k].reset_index()\n .rename(columns={\"respondent_id\": \"utility_id_ferc1\"})\n .query(\"utility_id_ferc1 in @utility_id_ferc1_targets\")\n .merge(utilities_ferc1, on=\"utility_id_ferc1\")\n .merge(utilities_eia[[\"utility_id_pudl\", \"utility_id_eia\"]], on=\"utility_id_pudl\")\n .set_index([\"utility_id_ferc1\", \"report_year\", \"utility_name_ferc1\", \"utility_id_eia\", \"utility_id_pudl\"])\n .reset_index()\n )\n print(f\"{len(binz_out[k])} records in {k}\")\n",
"_____no_output_____"
],
[
"all_target_eia_ids = set(utility_id_eia_targets.reset_index().utility_id_eia).union({22500, 13780, 13809, 13902})\nbinz_out[\"elec_sales_eia861\"] = (\n sales_eia861\n .merge(utilities_eia, on=\"utility_id_eia\", how=\"left\")\n .query(\"utility_id_eia in @all_target_eia_ids\")\n .merge(utilities_ferc1, on=\"utility_id_pudl\", how=\"left\")\n .set_index([\"utility_id_eia\", \"report_year\", \"utility_name_eia\", \"utility_id_pudl\", \"utility_id_ferc1\", \"utility_name_ferc1\", \"state\"])\n .reset_index()\n .astype({\"utility_id_pudl\": \"Int64\",\n \"utility_id_ferc1\": \"Int64\"})\n)",
"_____no_output_____"
],
[
"for df in binz_out:\n print(f\"Writing {df}.csv\")\n binz_out[df].to_csv(f\"{df}.csv\", index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0d7aa41dafbdd17b8f21d3e0581d26627a96687 | 9,698 | ipynb | Jupyter Notebook | Data-Preprocessing/Stemming.ipynb | guru9573/NLP | 6cfad0b865fa893883881ac8d848d04afa189b68 | [
"MIT"
] | 1 | 2021-08-24T15:48:04.000Z | 2021-08-24T15:48:04.000Z | Data-Preprocessing/Stemming.ipynb | saishiva024/Natural-Language-Processing | 6cfad0b865fa893883881ac8d848d04afa189b68 | [
"MIT"
] | null | null | null | Data-Preprocessing/Stemming.ipynb | saishiva024/Natural-Language-Processing | 6cfad0b865fa893883881ac8d848d04afa189b68 | [
"MIT"
] | null | null | null | 25.387435 | 204 | 0.412972 | [
[
[
"## Stemming\n\n* Stemming is the process of removing suffix from word to obtain base or root word i.e., to reduce inflectional form of word to base word.\n* Stemming will chop-off ‘s’, ‘es’, ‘ed’, ‘ing’, ‘ly’ etc from the end of the words and sometimes the conversion is not desirable. But nonetheless, stemming helps us in standardizing text.",
"_____no_output_____"
],
[
"## nltk",
"_____no_output_____"
]
],
[
[
"import nltk\n\nfrom nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer\nfrom nltk.tokenize import word_tokenize",
"_____no_output_____"
],
[
"nltk.download('punkt')",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n"
]
],
[
[
"#### Porter Stemmer\n\nThis is one of the most common and gentle stemmer. It is very fast but not precise enough.\n",
"_____no_output_____"
]
],
[
[
"porter_stemmer = PorterStemmer()",
"_____no_output_____"
],
[
"def stem_words_porter(text):\n words = word_tokenize(text)\n\n stem_words = [stemmer.stem(word) for word in words]\n\n return stem_words",
"_____no_output_____"
],
[
"input_text = \"SpaceX is an American aerospace manufacturer, space transportation services and communications company headquartered in Hawthorne, California. It was established by Elon Musk\"\n\nstem_words_porter(input_text)",
"_____no_output_____"
]
],
[
[
"#### Snowball Stemmer\n\n* There were some improvements done on Porter Stemmer which made it more precise over large datasets\n\n* One feature of Snowball Stemmer is that it will ignore stemming of Stopwords",
"_____no_output_____"
]
],
[
[
"snowball_stemmer = SnowballStemmer(language=\"english\")",
"_____no_output_____"
],
[
"def stem_words_snowball(text):\n words = word_tokenize(text)\n\n stem_words = [snowball_stemmer.stem(word) for word in words]\n\n return stem_words",
"_____no_output_____"
],
[
"stem_words_snowball(input_text)",
"_____no_output_____"
]
],
[
[
"You can see \"was\" is handled well by Snowball Stemmer compared to PorterStemmer.",
"_____no_output_____"
],
[
"#### Lancaster Stemmer\n* This very aggressive Stemmer and will hugely trim down the vocabulary\n* It is fast but not quite advisable as the base word will not be much accurate",
"_____no_output_____"
]
],
[
[
"lancaster = LancasterStemmer()",
"_____no_output_____"
],
[
"def stem_words_lancaster(text):\n words = word_tokenize(text)\n\n stem_words = [lancaster.stem(word) for word in words]\n\n return stem_words",
"_____no_output_____"
],
[
"stem_words_lancaster(input_text)",
"_____no_output_____"
]
],
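[
[
"To make the differences between the three stemmers concrete, the sketch below runs a few words through each of them side by side, reusing the stemmer objects defined above.",
"_____no_output_____"
]
],
[
[
"# Side-by-side comparison of the three stemmers defined above\nfor word in [\"services\", \"running\", \"generously\", \"was\"]:\n    print(word,\n          porter_stemmer.stem(word),\n          snowball_stemmer.stem(word),\n          lancaster.stem(word))",
"_____no_output_____"
]
],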
[
[
"### spacy\n\nIt might be surprising to you but spaCy doesn't contain any function for stemming as it relies on lemmatization only.",
"_____no_output_____"
],
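[
"For completeness, here is a minimal lemmatization sketch with spaCy. It assumes the `en_core_web_sm` model has been downloaded (`python -m spacy download en_core_web_sm`):\n\n```python\nimport spacy\n\nnlp = spacy.load(\"en_core_web_sm\")\ndoc = nlp(\"SpaceX was established by Elon Musk\")\nprint([token.lemma_ for token in doc])  # lemmas, e.g. 'be' for 'was'\n```",
"_____no_output_____"
],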
[
"\nProblems with Stemming\n\n***Ex:*** Root word of **services** will be given as **servic** which is not correct as shown in example",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0d7b27a5708f6114e739fc1a59b1662f65fa8cc | 18,885 | ipynb | Jupyter Notebook | frameworks/mxnet/get_started_mnist_deploy.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2,610 | 2020-10-01T14:14:53.000Z | 2022-03-31T18:02:31.000Z | frameworks/mxnet/get_started_mnist_deploy.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 1,959 | 2020-09-30T20:22:42.000Z | 2022-03-31T23:58:37.000Z | frameworks/mxnet/get_started_mnist_deploy.ipynb | Amirosimani/amazon-sagemaker-examples | bc35e7a9da9e2258e77f98098254c2a8e308041a | [
"Apache-2.0"
] | 2,052 | 2020-09-30T22:11:46.000Z | 2022-03-31T23:02:51.000Z | 34.843173 | 270 | 0.60556 | [
[
[
"# Deploy a Trained MXNet Model\nIn this notebook, we walk through the process of deploying a trained model to a SageMaker endpoint. If you recently ran [the notebook for training](get_started_mnist_deploy.ipynb) with %store% magic, the `model_data` can be restored. Otherwise, we retrieve the \nmodel artifact from a public S3 bucket.",
"_____no_output_____"
]
],
[
[
"# setups\n\nimport os\nimport json\n\n\nimport boto3\nimport sagemaker\nfrom sagemaker.mxnet import MXNetModel\nfrom sagemaker import get_execution_role, Session\n\nsess = Session()\nrole = get_execution_role()\n\n%store -r mx_mnist_model_data\n\ntry:\n mx_mnist_model_data\nexcept NameError:\n import json\n\n # copy a pretrained model from a public public to your default bucket\n with open(\"code/config.json\", \"r\") as f:\n CONFIG = json.load(f)\n\n bucket = CONFIG[\"public_bucket\"]\n s3 = boto3.client(\"s3\")\n key = \"datasets/image/MNIST/model/mxnet-training-2020-11-21-01-38-01-009/model.tar.gz\"\n target = os.path.join(\"/tmp\", \"model.tar.gz\")\n\n s3.download_file(bucket, key, target)\n\n # upload to default bucket\n mx_mnist_model_data = sess.upload_data(\n path=os.path.join(\"/tmp\", \"model.tar.gz\"),\n bucket=sess.default_bucket(),\n key_prefix=\"model/mxnet\",\n )",
"_____no_output_____"
],
[
"print(mx_mnist_model_data)",
"_____no_output_____"
]
],
[
[
"## MXNet Model Object\nThe `MXNetModel` class allows you to define an environment for making inference using your\nmodel artifact. Like `MXNet` class we discussed [in this notebook for training an MXNet model](get_started_mnist_train.ipynb), it is high level API used to set up a docker image for your model hosting service.\n\nOnce it is properly configured, it can be used to create a SageMaker\nEndpoint on an EC2 instance. The SageMaker endpoint is a containerized environment that uses your trained model \nto make inference on incoming data via RESTful API calls. \n\nSome common parameters used to initiate the `MXNetModel` class are:\n- entry_point: A user defined python file to be used by the inference container as handlers of incoming requests\n- source_dir: The directory of the `entry_point`\n- role: An IAM role to make AWS service requests\n- model_data: the S3 bucket URI of the compressed model artifact. It can be a path to a local file if the endpoint is to be deployed on the SageMaker instance you are using to run this notebook (local mode)\n- framework_version: version of the MXNet package to be used\n- py_version: python version to be used\n\nWe elaborate on the `entry_point` below.",
"_____no_output_____"
]
],
[
[
"model = MXNetModel(\n entry_point=\"inference.py\",\n source_dir=\"code\",\n role=role,\n model_data=mx_mnist_model_data,\n framework_version=\"1.7.0\",\n py_version=\"py3\",\n)",
"_____no_output_____"
]
],
[
[
"### Entry Point for the Inference Image\n\nYour model artifacts pointed by `model_data` is pulled by the `MXNetModel` and it is decompressed and saved in\nin the docker image it defines. They become regular model checkpoint files that you would produce outside SageMaker. This means in order to use your trained model for serving, \nyou need to tell `MXNetModel` class how to a recover a MXNet model from the static checkpoint.\n\nAlso, the deployed endpoint interacts with RESTful API calls, you need to tell it how to parse an incoming \nrequest to your model. \n\nThese two instructions needs to be defined as two functions in the python file pointed by `entry_point`.\n\nBy convention, we name this entry point file `inference.py` and we put it in the `code` directory.\n\nTo tell the inference image how to load the model checkpoint, you need to implement a function called \n`model_fn`. This function takes one positional argument \n\n- `model_dir`: the directory of the static model checkpoints in the inference image.\n\nThe return of `model_fn` is an MXNet model. In this example, the `model_fn`\nlooks like:\n\n```python\ndef model_fn(model_dir):\n \"\"\"Load the gluon model. Called once when hosting service starts.\n\n :param: model_dir The directory where model files are stored.\n :return: a model (in this case a Gluon network)\n \"\"\"\n net = gluon.SymbolBlock.imports(\n symbol_file=os.path.join(model_dir, 'compiled-symbol.json'),\n input_names=['data'],\n param_file=os.path.join(model_dir, 'compiled-0000.params'))\n return net\n```\n\nNext, you need to tell the hosting service how to handle the incoming data. This includes:\n\n* How to parse the incoming request\n* How to use the trained model to make inference\n* How to return the prediction to the caller of the service\n\n\nYou do it by implementing a function\ncalled `transform_fn`. This function takes 4 positional arguments:\n\n- `net`: the return from `model_fn`\n- `data`: the payload of the incoming request\n- `content_type`: the content type of the incoming request\n- `accept_type`: the conetent type of the response\n\nIn this example, the `transform_fn` looks like:\n```python\n\ndef transform_fn(net, data, input_content_type, output_content_type):\n assert input_content_type=='application/json'\n assert output_content_type=='application/json' \n\n # parsed should be a 1d array of length 728\n parsed = json.loads(data)\n parsed = parsed['inputs'] \n \n # convert to numpy array\n arr = np.array(parsed).reshape(-1, 1, 28, 28)\n \n # convert to mxnet ndarray\n nda = mx.nd.array(arr)\n\n output = net(nda)\n \n prediction = mx.nd.argmax(output, axis=1)\n response_body = json.dumps(prediction.asnumpy().tolist())\n\n return response_body, output_content_type\n```\n\nThe `content_type` is used by the function to parse the `data`. \nIn the following example, the functions requires the\ncontent type of the payload to be a json string and it\nparses the json string into a python dictionary by `json.loads`.\nMoreover, it assumes the parsed dictionary contains a key `inputs`\nthat maps to the input data to be consumed by the model. \nIt also assumes the input data is a flattened 1D array representation\nthat can be reshaped into a numpy array of shape (-1, 1, 28, 28).\nThe input images of a MXNet model follows NCHW convention. 
\nIt also assumes the input data is already normalized and can be readily\nconsumed by the neural network.\n\nAfter the inference, the function uses `accept_type` to encode the \nprediction into the content type of the response. In this example,\nthe function requires the caller of the service to accept json string.\n\nThe return of `transform_fn` is always a tuple of encoded response body\nand the content type to be accepted by the caller. ",
"_____no_output_____"
],
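[
"Before deploying, it can help to exercise the JSON contract of `transform_fn` in isolation. The following minimal sketch (numpy only, no endpoint involved) builds a payload the way a client would and parses it the way the handler does:\n\n```python\nimport json\nimport numpy as np\n\npayload = json.dumps({\"inputs\": [0.0] * 784})  # what a client would send\nparsed = np.array(json.loads(payload)[\"inputs\"]).reshape(-1, 1, 28, 28)\nprint(parsed.shape)  # (1, 1, 28, 28) -- NCHW, with the batch size inferred\n```",
"_____no_output_____"
],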
[
"## Execute the inference container\nOnce the `MXNetModel` class is initiated, we can call its `deploy` method to run the container for the hosting\nservice. Some common parameters needed to call `deploy` methods are:\n\n- initial_instance_count: the number of SageMaker instances to be used to run the hosting service.\n- instance_type: the type of SageMaker instance to run the hosting service. Set it to `local` if you want run the hosting service on the local SageMaker instance. Local mode are typically used for debugging. \n- serializer: A python callable used to serialize (encode) the request data.\n- deserializer: A python callable used to deserialize (decode) the response data.\n\nCommonly used serializers and deserialzers are implemented in `sagemaker.serializers` and `sagemaker.deserializer`\nsubmodules of the SageMaker Python SDK. \n\nSince in the `transform_fn` we declared that the incoming requests are json-encoded, we need use a json serializer,\nto encode the incoming data into a json string. Also, we declared the return content type to be json string, we\nneed to use a json deserializer to parse the response into a an (in this case, an \ninteger represeting the predicted hand-written digit). \n\n<span style=\"color:red\"> Note: local mode is not supported in SageMaker Studio </span>",
"_____no_output_____"
]
],
[
[
"from sagemaker.serializers import JSONSerializer\nfrom sagemaker.deserializers import JSONDeserializer\n\n# set local_mode to False if you want to deploy on a remote\n# SageMaker instance\n\nlocal_mode = False\n\nif local_mode:\n instance_type = \"local\"\nelse:\n instance_type = \"ml.c4.xlarge\"\n\npredictor = model.deploy(\n initial_instance_count=1,\n instance_type=instance_type,\n serializer=JSONSerializer(),\n deserializer=JSONDeserializer(),\n)",
"_____no_output_____"
]
],
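[
[
"To see what the serializer actually does to the data on the wire, here is a small sketch using the same class, independent of the endpoint.",
"_____no_output_____"
]
],
[
[
"# What JSONSerializer does to the request payload, shown in isolation\nfrom sagemaker.serializers import JSONSerializer\n\nprint(JSONSerializer().serialize({\"inputs\": [1, 2, 3]}))  # '{\"inputs\": [1, 2, 3]}'",
"_____no_output_____"
]
],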
[
[
"The `predictor` we get above can be used to make prediction requests agaist a SageMaker endpoint. For more\ninformation, check [the api reference for SageMaker Predictor](\nhttps://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html#sagemaker.predictor.Predictor)\n\nNow, let's test the endpoint with some dummy data. ",
"_____no_output_____"
]
],
[
[
"import random\n\ndummy_data = {\"inputs\": [random.random() for _ in range(784)]}",
"_____no_output_____"
]
],
[
[
"In `transform_fn`, we declared that the parsed data is a python dictionary with a key `inputs` and its value should \nbe a 1D array of length 784. Hence, the definition of `dummy_data`. ",
"_____no_output_____"
]
],
[
[
"res = predictor.predict(dummy_data)",
"_____no_output_____"
],
[
"print(\"Predicted digit:\", *map(int, res))",
"_____no_output_____"
]
],
[
[
"If the input data does not look exactly like `dummy_data`, the endpoint will raise an exception. This is because \nof the stringent way we defined the `transform_fn`. Let's test the following example.",
"_____no_output_____"
]
],
[
[
"dummy_data = [random.random() for _ in range(784)]",
"_____no_output_____"
]
],
[
[
"When the `dummy_data` is parsed in `transform_fn`, it does not have an `inputs` field, so `transform_fn` will crush. ",
"_____no_output_____"
]
],
[
[
"# uncomment the following line to make inference on incorrectly formated input data\n# res = predictor.predict(dummy_data)",
"_____no_output_____"
]
],
[
[
"Now, let's use real MNIST test to test the endpoint. We use helper functions defined in `code.utils` to \ndownload MNIST data set and normalize the input data.",
"_____no_output_____"
]
],
[
[
"import random\nimport boto3\nimport matplotlib.pyplot as plt\nimport os\nimport numpy as np\nimport gzip\nimport json\n\n%matplotlib inline\n\n# Donwload MNIST test set from a public bucket\nwith open(\"code/config.json\", \"rb\") as f:\n CONFIG = json.load(f)\n\nfname = \"t10k-images-idx3-ubyte.gz\"\nbucket = CONFIG[\"public_bucket\"]\nkey = \"datasets/image/MNIST/\" + fname\ntarget = os.path.join(\"/tmp\", fname)\n\ns3 = boto3.client(\"s3\")\nif not os.path.exists(target):\n s3.download_file(bucket, key, target)\n\n# parse to numpy\nwith gzip.open(target, \"rb\") as f:\n images = np.frombuffer(f.read(), np.uint8, offset=16).reshape(-1, 28, 28)\n\n\n# randomly sample 16 images to inspect\nmask = random.sample(range(images.shape[0]), 16)\nsamples = images[mask]\n\n# plot the images\nfig, axs = plt.subplots(nrows=1, ncols=16, figsize=(16, 1))\n\nfor i, splt in enumerate(axs):\n splt.imshow(samples[i])",
"_____no_output_____"
]
],
[
[
"First, let us use the model to infer the samples one-by-one. This is the typical use case\nfor an online application.",
"_____no_output_____"
]
],
[
[
"# convert to float and normalize normalize the input\n\n\ndef normalize(x, axis):\n eps = np.finfo(float).eps\n mean = np.mean(x, axis=axis, keepdims=True)\n # avoid division by zero\n std = np.std(x, axis=axis, keepdims=True) + eps\n return (x - mean) / std\n\n\nsamples = normalize(samples.astype(np.float32), axis=(1, 2)) # mean 0; std 1\n\nres = []\nfor img in samples:\n data = {\"inputs\": img.flatten().tolist()}\n res.append(predictor.predict(data)[0])",
"_____no_output_____"
],
[
"print(\"Predictions: \", *map(int, res))",
"_____no_output_____"
]
],
[
[
"Since in `transform_fn`, the parsed numpy array could have take on any value for its batch\ndimension, we can send the entire `samples` at once and let the model do a batch inference.",
"_____no_output_____"
]
],
[
[
"data = {\"inputs\": samples.tolist()}\nres = predictor.predict(data)",
"_____no_output_____"
],
[
"print(\"Predictions: \", *map(int, res))",
"_____no_output_____"
]
],
[
[
"## Test and debug the entry point before deployment\n\nWhen deploying a model to a SageMaker endpoint, it is a good practice to test the entry \npoint. The following snippet shows you how you can test and debug the `model_fn` and \n`transform_fn` you implemented in the entry point for the inference image. ",
"_____no_output_____"
]
],
[
[
"!pygmentize code/test_inference.py",
"_____no_output_____"
]
],
[
[
"The `test` function simulates how the inference container works. It pulls the model\nartifact and loads the model into memory by calling `model_fn` and parse it with `model_dir`. When it receives a request, it calls `transform_fn` and parse it with the loaded model, the payload of the request, request content type and response content type. \n\nImplementing such a test function helps you debugging the entry point before put it into the production. If `test` runs correctly, then you can be certain that if the incoming\ndata and its content type are what they suppose to be, then the endpoint point is going\nto work as expected. ",
"_____no_output_____"
],
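[
"As a rough illustration, such a test harness might look like the simplified, hypothetical sketch below (the real `code/test_inference.py`, printed above, may differ):\n\n```python\nimport json\nfrom inference import model_fn, transform_fn\n\ndef test(model_dir):\n    # load the model the way the container would\n    net = model_fn(model_dir)\n    # simulate an incoming request\n    data = json.dumps({\"inputs\": [0.0] * 784})\n    body, content_type = transform_fn(net, data, \"application/json\", \"application/json\")\n    assert content_type == \"application/json\"\n    print(json.loads(body))\n```",
"_____no_output_____"
],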
[
"## (Optional) Clean up \n\nIf you do not plan to use the endpoint, you should delete it to free up some computation \nresource. If you use local, you will need to manually delete the docker container bounded\nat port 8080 (the port that listens to the incoming request).\n",
"_____no_output_____"
]
],
[
[
"import os\n\nif not local_mode:\n predictor.delete_endpoint()\nelse:\n # detach the inference container from port 8080 (in local mode)\n os.system(\"docker container ls | grep 8080 | awk '{print $1}' | xargs docker container rm -f\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0d7ba49757632bb9d6c7346742dc9e70152efb8 | 8,678 | ipynb | Jupyter Notebook | projects/config/Experiment.ipynb | Matheus158257/projects | 26a6148046533476e625a872a2950c383aa975a8 | [
"Apache-2.0"
] | null | null | null | projects/config/Experiment.ipynb | Matheus158257/projects | 26a6148046533476e625a872a2950c383aa975a8 | [
"Apache-2.0"
] | null | null | null | projects/config/Experiment.ipynb | Matheus158257/projects | 26a6148046533476e625a872a2950c383aa975a8 | [
"Apache-2.0"
] | null | null | null | 39.990783 | 1,888 | 0.720442 | [
[
[
"# Nova Tarefa - Experimento\n\nPreencha aqui com detalhes sobre a tarefa.<br>\n### **Em caso de dúvidas, consulte os [tutoriais da PlatIAgro](https://platiagro.github.io/tutorials/).**",
"_____no_output_____"
],
[
"## Declaração de parâmetros e hiperparâmetros\n\nDeclare parâmetros com o botão <img src=\"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsMIwnXL7c0AAACDUlEQVQ4y92UP4gTQRTGf29zJxhJZ2NxbMBKziYWlmJ/ile44Nlkd+dIYWFzItiNgoIEtFaTzF5Ac/inE/urtLWxsMqmUOwCEpt1Zmw2xxKi53XitPO9H9978+aDf/3IUQvSNG0450Yi0jXG7C/eB0cFeu9viciGiDyNoqh2KFBrHSilWstgnU7nFLBTgl+ur6/7PwK11kGe5z3n3Hul1MaiuCgKDZwALHA7z/Oe1jpYCtRaB+PxuA8kQM1aW68Kt7e3zwBp6a5b1ibj8bhfhQYVZwMRiQHrvW9nWfaqCrTWPgRWvPdvsiy7IyLXgEJE4slk8nw+T5nDgDbwE9gyxryuwpRSF5xz+0BhrT07HA4/AyRJchUYASvAbhiGaRVWLIMBYq3tAojIszkMoNRulbXtPM8HwV/sXSQi54HvQRDcO0wfhGGYArvAKjAq2wAgiqJj3vsHpbtur9f7Vi2utLx60LLW2hljEuBJOYu9OI6vAzQajRvAaeBLURSPlsBelA+VhWGYaq3dwaZvbm6+m06noYicE5ErrVbrK3AXqHvvd4bD4Ye5No7jSERGwKr3Pms2m0pr7Rb30DWbTQWYcnFvAieBT7PZbFB1V6vVfpQaU4UtDQetdTCZTC557/eA48BlY8zbRZ1SqrW2tvaxCvtt2iRJ0i9/xb4x5uJRwmNlaaaJ3AfqIvKY/+78Av++6uiSZhYMAAAAAElFTkSuQmCC\" /> na barra de ferramentas.<br>\nO parâmetro `dataset` identifica os conjuntos de dados. Você pode importar arquivos de dataset com o botão <img src=\"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAABhWlDQ1BJQ0MgcHJvZmlsZQAAKJF9kT1Iw0AcxV9TtaIVBzuIOASpThb8QhylikWwUNoKrTqYXPohNGlIUlwcBdeCgx+LVQcXZ10dXAVB8APEydFJ0UVK/F9SaBHjwXE/3t173L0DhFqJqWbbGKBqlpGMRcVMdkUMvKID3QhiCOMSM/V4aiENz/F1Dx9f7yI8y/vcn6NHyZkM8InEs0w3LOJ14ulNS+e8TxxiRUkhPiceNeiCxI9cl11+41xwWOCZISOdnCMOEYuFFpZbmBUNlXiKOKyoGuULGZcVzluc1VKFNe7JXxjMacsprtMcRAyLiCMBETIq2EAJFiK0aqSYSNJ+1MM/4PgT5JLJtQFGjnmUoUJy/OB/8LtbMz854SYFo0D7i21/DAOBXaBete3vY9uunwD+Z+BKa/rLNWDmk/RqUwsfAb3bwMV1U5P3gMsdoP9JlwzJkfw0hXweeD+jb8oCfbdA16rbW2Mfpw9AmrpaugEODoGRAmWveby7s7W3f880+vsBocZyukMJsmwAAAAGYktHRAD/AP8A/6C9p5MAAAAJcEhZcwAADdcAAA3XAUIom3gAAAAHdElNRQfkBgsOBy6ASTeXAAAC/0lEQVQ4y5WUT2gcdRTHP29m99B23Uiq6dZisgoWCxVJW0oL9dqLfyhCvGWY2YUBI95MsXgwFISirQcLhS5hfgk5CF3wJIhFI7aHNsL2VFZFik1jS1qkiZKdTTKZ3/MyDWuz0fQLc/m99/vMvDfv+4RMlUrlkKqeAAaBAWAP8DSgwJ/AXRG5rao/WWsvTU5O3qKLBMD3fSMiPluXFZEPoyj67PGAMzw83PeEMABHVT/oGpiamnoAmCcEWhH5tFsgF4bh9oWFhfeKxeJ5a+0JVT0oImWgBPQCKfAQuAvcBq67rltX1b+6ApMkKRcKhe9V9QLwbavV+qRer692Sx4ZGSnEcXw0TdP3gSrQswGYz+d/S5IkVtXTwOlCoZAGQXAfmAdagAvsAErtdnuXiDy6+023l7qNRsMODg5+CawBzwB9wFPA7mx8ns/KL2Tl3xCRz5eWlkabzebahrHxPG+v4zgnc7ncufHx8Z+Hhoa29fT0lNM03Q30ikiqqg+ttX/EcTy3WTvWgdVqtddaOw/kgXvADHBHROZVNRaRvKruUNU+EdkPfGWM+WJTYOaSt1T1LPDS/4zLWWPMaLVaPWytrYvIaBRFl/4F9H2/JCKvGmMu+76/X0QOqGoZKDmOs1NV28AicMsYc97zvFdc1/0hG6kEeNsY83UnsCwivwM3VfU7YEZE7lhr74tIK8tbnJiYWPY8b6/ruleAXR0ftQy8boyZXi85CIIICDYpc2ZgYODY3NzcHmvt1eyvP64lETkeRdE1yZyixWLx5U2c8q4x5mIQBE1g33/0d3FlZeXFR06ZttZesNZejuO4q1NE5CPgWVV9E3ij47wB1IDlJEn+ljAM86urq7+KyAtZTgqsO0VV247jnOnv7/9xbGzMViqVMVX9uANYj6LonfVtU6vVkjRNj6jqGeCXzGrPAQeA10TkuKpOz87ONrayhnIA2Qo7BZwKw3B7kiRloKSqO13Xja21C47jPNgysFO1Wi0GmtmzQap6DWgD24A1Vb3SGf8Hfstmz1CuXEIAAAAASUVORK5CYII=\" /> na barra de ferramentas.",
"_____no_output_____"
]
],
[
[
"dataset = \"\" #@param {type:\"string\"}",
"_____no_output_____"
]
],
[
[
"## Acesso ao conjunto de dados\n\nO conjunto de dados utilizado nesta etapa será o mesmo carregado através da plataforma.<br>\nO tipo da variável retornada depende do arquivo de origem:\n- [pandas.DataFrame](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) para CSV e compressed CSV: .csv .csv.zip .csv.gz .csv.bz2 .csv.xz\n- [Binary IO stream](https://docs.python.org/3/library/io.html#binary-i-o) para outros tipos de arquivo: .jpg .wav .zip .h5 .parquet etc",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ndata = pd.read_csv(f'/tmp/data/{dataset}')\ndata",
"_____no_output_____"
]
],
[
[
"## Acesso aos metadados do conjunto de dados\n\nUtiliza a função `stat_dataset` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para carregar metadados.<br>\nPor exemplo, arquivos CSV possuem `metadata['featuretypes']` para cada coluna no conjunto de dados (ex: categorical, numerical, or datetime).",
"_____no_output_____"
]
],
[
[
"from platiagro import stat_dataset\n\nmetadata = stat_dataset(name=dataset)\nmetadata",
"_____no_output_____"
]
],
[
[
"## Conteúdo da tarefa",
"_____no_output_____"
]
],
[
[
"# adicione seu código aqui...",
"_____no_output_____"
]
],
[
[
"## Salva alterações no conjunto de dados\n\nO conjunto de dados será salvo (e sobrescrito com as respectivas mudanças) localmente, no container da experimentação, utilizando a função `pandas.DataFrame.to_csv`.<br>",
"_____no_output_____"
]
],
[
[
"df.to_csv(f'/tmp/data/{dataset}', index=False)",
"_____no_output_____"
]
],
[
[
"## Salva métricas\n\nUtiliza a função `save_metrics` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para salvar métricas. Por exemplo: `accuracy`, `precision`, `r2_score`, `custom_score` etc.<br>",
"_____no_output_____"
]
],
[
[
"from platiagro import save_metrics\n\nsave_metrics(accuracy=0.5, custom_score=1000)",
"_____no_output_____"
]
],
[
[
"## Salva figuras\n\nUtiliza a função `save_figures` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para salvar figuras do [matplotlib](https://matplotlib.org/3.2.1/gallery/index.html).",
"_____no_output_____"
]
],
[
[
"from platiagro import save_figures\n\nsave_figures(figure=matplotfig)",
"_____no_output_____"
]
],
[
[
"## Salva modelo e outros artefatos\n\nUtiliza a função `save_model` do [SDK da PlatIAgro](https://platiagro.github.io/sdk/) para salvar modelos e outros artefatos.<br>\nEssa função torna estes artefatos disponíveis para o notebook de implantação.",
"_____no_output_____"
]
],
[
[
"from platiagro import save_model\n\nsave_model(model=model, other_artifact={\"key\": \"value\"})",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d7bfe382dd103cb19201a51be088c47e3e16f3 | 16,709 | ipynb | Jupyter Notebook | mission_to_mars.ipynb | eanord4/scraping-challenge | d08bd13b8e0e987d4360681a2c3008c2ad8dc493 | [
"ADSL"
] | null | null | null | mission_to_mars.ipynb | eanord4/scraping-challenge | d08bd13b8e0e987d4360681a2c3008c2ad8dc493 | [
"ADSL"
] | null | null | null | mission_to_mars.ipynb | eanord4/scraping-challenge | d08bd13b8e0e987d4360681a2c3008c2ad8dc493 | [
"ADSL"
] | null | null | null | 28.908304 | 169 | 0.430127 | [
[
[
"# 12-Web-Scraping-and-Document-Databases\nEric Nordstrom",
"_____no_output_____"
],
[
"### Setup",
"_____no_output_____"
]
],
[
[
"# dependencies\nfrom selenium import webdriver\nfrom bs4 import BeautifulSoup as BS\nimport requests\nimport pandas as pd\n\n# set up selenium driver\ndriver = webdriver.Firefox()",
"_____no_output_____"
]
],
[
[
"### NASA Mars News",
"_____no_output_____"
]
],
[
[
"# get html to parse\nurl = \"https://mars.nasa.gov/news\"\ndriver.get(url)\nsoup = BS(driver.page_source, \"html.parser\")\n\n# parse html\nitem = soup.find('li', class_=\"slide\")\ndate = item.find('div', class_=\"list_date\").text\ntitle_a = item.find('div', class_=\"content_title\").a\ntitle = title_a.text\nhref = title_a['href']\npara = item.find('div', class_=\"article_teaser_body\").text\n\n# display results\nprint(date)\nprint(title)\nprint()\nprint(para)\nprint(\"\\nMore:\", \"https://mars.nasa.gov\" + href)",
"February 27, 2020\nThe MarCO Mission Comes to an End\n\nThe pair of briefcase-sized satellites made history when they sailed past Mars in 2019.\n\nMore: https://mars.nasa.gov/news/8408/the-marco-mission-comes-to-an-end/\n"
]
],
[
[
"### JPL Mars Space Images - Featured Image",
"_____no_output_____"
]
],
[
[
"# get html to parse\nurl = \"https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars\"\nr = requests.get(url)\nassert(r.status_code == 200)\nsoup = BS(r.text, \"html.parser\")\n\n# parse html\nbutton_a = soup.find('a', id=\"full_image\")\nfeatured_image_url = \"https://www.jpl.nasa.gov\" + button_a['data-fancybox-href']\ntitle = button_a['data-title']\ndesc = button_a['data-description']\n\n# display results\nprint(title, desc, \"Image:\", sep=\"\\n\\n\", end=\" \")\nprint(featured_image_url)",
"Curiosity Self-Portrait at 'Big Sky' Drilling Site\n\nThis self-portrait of NASA's Curiosity Mars rover shows the vehicle at the 'Big Sky' site, where its drill collected the mission's fifth taste of Mount Sharp.\n\nImage: https://www.jpl.nasa.gov/spaceimages/images/mediumsize/PIA19920_ip.jpg\n"
]
],
[
[
"### Mars Weather",
"_____no_output_____"
]
],
[
[
"# get html to parse\nurl = \"https://twitter.com/marswxreport?lang=en\"\nr = requests.get(url)\nassert(r.status_code == 200)\nsoup = BS(r.text, \"html.parser\")\n\n# parse html\n# for some reason this shows up as a `span` via the inspector, but something\n# goes wrong via requests and even selenium. the 'p' tag below was found via\n# <str.find> on the request html but does not appear via the inspector.\np = soup.find('p', class_=\"tweet-text\")\nmars_weather = p.text.split(\"pic.twitter.com/\")[0]\n\n# display results\nprint(mars_weather)",
"InSight sol 448 (2020-02-29) low -94.1ºC (-137.3ºF) high -8.3ºC (17.0ºF)\nwinds from the SSW at 5.5 m/s (12.4 mph) gusting to 19.9 m/s (44.6 mph)\npressure at 6.30 hPa\n"
]
],
[
[
"### Mars Facts",
"_____no_output_____"
]
],
[
[
"#get html to parse\nurl = \"http://space-facts.com/mars/\"\nr = requests.get(url)\nassert(r.status_code == 200)\n\n# parse html\n# \"HTML table string\"? i think just a data frame makes sense?\ntables = pd.read_html(r.text)\n\n# display results\ntables[0]",
"_____no_output_____"
],
[
"tables[1]",
"_____no_output_____"
],
[
"# assign variables\nmars_facts = tables[0].rename(columns={0: \"Property\", 1: \"Value\"}).set_index(\"Property\").to_html()\nearth_comparison = tables[1].rename(columns={\"Mars - Earth Comparison\": \"Property\"}).set_index(\"Property\").to_html()\n\n# display results\nprint(mars_facts)\nprint()\nprint(earth_comparison)",
"<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>Value</th>\n </tr>\n <tr>\n <th>Property</th>\n <th></th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>Equatorial Diameter:</th>\n <td>6,792 km</td>\n </tr>\n <tr>\n <th>Polar Diameter:</th>\n <td>6,752 km</td>\n </tr>\n <tr>\n <th>Mass:</th>\n <td>6.39 × 10^23 kg (0.11 Earths)</td>\n </tr>\n <tr>\n <th>Moons:</th>\n <td>2 (Phobos & Deimos)</td>\n </tr>\n <tr>\n <th>Orbit Distance:</th>\n <td>227,943,824 km (1.38 AU)</td>\n </tr>\n <tr>\n <th>Orbit Period:</th>\n <td>687 days (1.9 years)</td>\n </tr>\n <tr>\n <th>Surface Temperature:</th>\n <td>-87 to -5 °C</td>\n </tr>\n <tr>\n <th>First Record:</th>\n <td>2nd millennium BC</td>\n </tr>\n <tr>\n <th>Recorded By:</th>\n <td>Egyptian astronomers</td>\n </tr>\n </tbody>\n</table>\n\n<table border=\"1\" class=\"dataframe\">\n <thead>\n <tr style=\"text-align: right;\">\n <th></th>\n <th>Mars</th>\n <th>Earth</th>\n </tr>\n <tr>\n <th>Property</th>\n <th></th>\n <th></th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>Diameter:</th>\n <td>6,779 km</td>\n <td>12,742 km</td>\n </tr>\n <tr>\n <th>Mass:</th>\n <td>6.39 × 10^23 kg</td>\n <td>5.97 × 10^24 kg</td>\n </tr>\n <tr>\n <th>Moons:</th>\n <td>2</td>\n <td>1</td>\n </tr>\n <tr>\n <th>Distance from Sun:</th>\n <td>227,943,824 km</td>\n <td>149,598,262 km</td>\n </tr>\n <tr>\n <th>Length of Year:</th>\n <td>687 Earth days</td>\n <td>365.24 days</td>\n </tr>\n <tr>\n <th>Temperature:</th>\n <td>-153 to 20 °C</td>\n <td>-88 to 58°C</td>\n </tr>\n </tbody>\n</table>\n"
]
],
[
[
"### Mars Hemispheres",
"_____no_output_____"
]
],
[
[
"# get html to parse\nurl = \"https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars\"\ndriver.get(url)\n\n# parse html on main page\nimgs = {} # initially the urls of the images pages, then replaced with actual image urls\nfor a in driver.find_elements_by_tag_name('a'):\n if a.get_attribute('class') == \"itemLink product-item\" and a.find_elements_by_tag_name('h3'):\n imgs[a.text] = a.get_attribute('href')\n \n# parse html on each image page\nfor key, value in imgs.items():\n \n driver.get(value)\n \n for img in driver.find_elements_by_tag_name('img'):\n if img.get_attribute('class') == \"wide-image\":\n imgs[key] = img.get_attribute('src')\n break\n\n# display results\nimgs",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d7d1ab03969a028c56b7691bfdb9ddbd687582 | 31,970 | ipynb | Jupyter Notebook | docs/_downloads/e72ffa33b709a3d3f5b3f4c195c07637/aws_distributed_training_tutorial.ipynb | leejh1230/PyTorch-tutorials-kr | ebbf44b863ff96c597631e28fc194eafa590c9eb | [
"BSD-3-Clause"
] | 1 | 2019-12-05T05:16:44.000Z | 2019-12-05T05:16:44.000Z | docs/_downloads/e72ffa33b709a3d3f5b3f4c195c07637/aws_distributed_training_tutorial.ipynb | leejh1230/PyTorch-tutorials-kr | ebbf44b863ff96c597631e28fc194eafa590c9eb | [
"BSD-3-Clause"
] | null | null | null | docs/_downloads/e72ffa33b709a3d3f5b3f4c195c07637/aws_distributed_training_tutorial.ipynb | leejh1230/PyTorch-tutorials-kr | ebbf44b863ff96c597631e28fc194eafa590c9eb | [
"BSD-3-Clause"
] | null | null | null | 133.208333 | 3,688 | 0.691304 | [
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\nPyTorch 1.0 Distributed Trainer with Amazon AWS\n===============================================\n\n**Author**: `Nathan Inkawhich <https://github.com/inkawhich>`_\n\n**Edited by**: `Teng Li <https://github.com/teng-li>`_\n",
"_____no_output_____"
],
[
"In this tutorial we will show how to setup, code, and run a PyTorch 1.0\ndistributed trainer across two multi-gpu Amazon AWS nodes. We will start\nwith describing the AWS setup, then the PyTorch environment\nconfiguration, and finally the code for the distributed trainer.\nHopefully you will find that there is actually very little code change\nrequired to extend your current training code to a distributed\napplication, and most of the work is in the one-time environment setup.\n\n\n",
"_____no_output_____"
],
[
"Amazon AWS Setup\n----------------\n\nIn this tutorial we will run distributed training across two multi-gpu\nnodes. In this section we will first cover how to create the nodes, then\nhow to setup the security group so the nodes can communicate with\neachother.\n\nCreating the Nodes\n~~~~~~~~~~~~~~~~~~\n\nIn Amazon AWS, there are seven steps to creating an instance. To get\nstarted, login and select **Launch Instance**.\n\n**Step 1: Choose an Amazon Machine Image (AMI)** - Here we will select\nthe ``Deep Learning AMI (Ubuntu) Version 14.0``. As described, this\ninstance comes with many of the most popular deep learning frameworks\ninstalled and is preconfigured with CUDA, cuDNN, and NCCL. It is a very\ngood starting point for this tutorial.\n\n**Step 2: Choose an Instance Type** - Now, select the GPU compute unit\ncalled ``p2.8xlarge``. Notice, each of these instances has a different\ncost but this instance provides 8 NVIDIA Tesla K80 GPUs per node, and\nprovides a good architecture for multi-gpu distributed training.\n\n**Step 3: Configure Instance Details** - The only setting to change here\nis increasing the *Number of instances* to 2. All other configurations\nmay be left at default.\n\n**Step 4: Add Storage** - Notice, by default these nodes do not come\nwith a lot of storage (only 75 GB). For this tutorial, since we are only\nusing the STL-10 dataset, this is plenty of storage. But, if you want to\ntrain on a larger dataset such as ImageNet, you will have to add much\nmore storage just to fit the dataset and any trained models you wish to\nsave.\n\n**Step 5: Add Tags** - Nothing to be done here, just move on.\n\n**Step 6: Configure Security Group** - This is a critical step in the\nconfiguration process. By default two nodes in the same security group\nwould not be able to communicate in the distributed training setting.\nHere, we want to create a **new** security group for the two nodes to be\nin. However, we cannot finish configuring in this step. For now, just\nremember your new security group name (e.g. launch-wizard-12) then move\non to Step 7.\n\n**Step 7: Review Instance Launch** - Here, review the instance then\nlaunch it. By default, this will automatically start initializing the\ntwo instances. You can monitor the initialization progress from the\ndashboard.\n\nConfigure Security Group\n~~~~~~~~~~~~~~~~~~~~~~~~\n\nRecall that we were not able to properly configure the security group\nwhen creating the instances. Once you have launched the instance, select\nthe *Network & Security > Security Groups* tab in the EC2 dashboard.\nThis will bring up a list of security groups you have access to. Select\nthe new security group you created in Step 6 (i.e. launch-wizard-12),\nwhich will bring up tabs called *Description, Inbound, Outbound, and\nTags*. First, select the *Inbound* tab and *Edit* to add a rule to allow\n\"All Traffic\" from \"Sources\" in the launch-wizard-12 security group.\nThen select the *Outbound* tab and do the exact same thing. Now, we have\neffectively allowed all Inbound and Outbound traffic of all types\nbetween nodes in the launch-wizard-12 security group.\n\nNecessary Information\n~~~~~~~~~~~~~~~~~~~~~\n\nBefore continuing, we must find and remember the IP addresses of both\nnodes. In the EC2 dashboard find your running instances. For both\ninstances, write down the *IPv4 Public IP* and the *Private IPs*. For\nthe remainder of the document, we will refer to these as the\n**node0-publicIP**, **node0-privateIP**, **node1-publicIP**, and\n**node1-privateIP**. 
The public IPs are the addresses we will use to SSH\nin, and the private IPs will be used for inter-node communication.\n\n\n",
"_____no_output_____"
],
[
"Environment Setup\n-----------------\n\nThe next critical step is the setup of each node. Unfortunately, we\ncannot configure both nodes at the same time, so this process must be\ndone on each node separately. However, this is a one time setup, so once\nyou have the nodes configured properly you will not have to reconfigure\nfor future distributed training projects.\n\nThe first step, once logged onto the node, is to create a new conda\nenvironment with python 3.6 and numpy. Once created activate the\nenvironment.\n\n::\n\n $ conda create -n nightly_pt python=3.6 numpy\n $ source activate nightly_pt\n\nNext, we will install a nightly build of Cuda 9.0 enabled PyTorch with\npip in the conda environment.\n\n::\n\n $ pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu90/torch_nightly.html\n\nWe must also install torchvision so we can use the torchvision model and\ndataset. At this time, we must build torchvision from source as the pip\ninstallation will by default install an old version of PyTorch on top of\nthe nightly build we just installed.\n\n::\n\n $ cd\n $ git clone https://github.com/pytorch/vision.git\n $ cd vision\n $ python setup.py install\n\nAnd finally, **VERY IMPORTANT** step is to set the network interface\nname for the NCCL socket. This is set with the environment variable\n``NCCL_SOCKET_IFNAME``. To get the correct name, run the ``ifconfig``\ncommand on the node and look at the interface name that corresponds to\nthe node's *privateIP* (e.g. ens3). Then set the environment variable as\n\n::\n\n $ export NCCL_SOCKET_IFNAME=ens3\n\nRemember, do this on both nodes. You may also consider adding the\nNCCL\\_SOCKET\\_IFNAME setting to your *.bashrc*. An important observation\nis that we did not setup a shared filesystem between the nodes.\nTherefore, each node will have to have a copy of the code and a copy of\nthe datasets. For more information about setting up a shared network\nfilesystem between nodes, see\n`here <https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-shared-file-storage-for-amazon-ec2/>`__.\n\n\n",
"_____no_output_____"
],
[
"Distributed Training Code\n-------------------------\n\nWith the instances running and the environments setup we can now get\ninto the training code. Most of the code here has been taken from the\n`PyTorch ImageNet\nExample <https://github.com/pytorch/examples/tree/master/imagenet>`__\nwhich also supports distributed training. This code provides a good\nstarting point for a custom trainer as it has much of the boilerplate\ntraining loop, validation loop, and accuracy tracking functionality.\nHowever, you will notice that the argument parsing and other\nnon-essential functions have been stripped out for simplicity.\n\nIn this example we will use\n`torchvision.models.resnet18 <https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.resnet18>`__\nmodel and will train it on the\n`torchvision.datasets.STL10 <https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.STL10>`__\ndataset. To accomodate for the dimensionality mismatch of STL-10 with\nResnet18, we will resize each image to 224x224 with a transform. Notice,\nthe choice of model and dataset are orthogonal to the distributed\ntraining code, you may use any dataset and model you wish and the\nprocess is the same. Lets get started by first handling the imports and\ntalking about some helper functions. Then we will define the train and\ntest functions, which have been largely taken from the ImageNet Example.\nAt the end, we will build the main part of the code which handles the\ndistributed training setup. And finally, we will discuss how to actually\nrun the code.\n\n\n",
"_____no_output_____"
],
[
"Imports\n~~~~~~~\n\nThe important distributed training specific imports here are\n`torch.nn.parallel <https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__,\n`torch.distributed <https://pytorch.org/docs/stable/distributed.html>`__,\n`torch.utils.data.distributed <https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler>`__,\nand\n`torch.multiprocessing <https://pytorch.org/docs/stable/multiprocessing.html>`__.\nIt is also important to set the multiprocessing start method to *spawn*\nor *forkserver* (only supported in Python 3),\nas the default is *fork* which may cause deadlocks when using multiple\nworker processes for dataloading.\n\n\n",
"_____no_output_____"
]
],
[
[
"import time\nimport sys\nimport torch\n\nif __name__ == '__main__':\n torch.multiprocessing.set_start_method('spawn')\n\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.distributed as dist\nimport torch.optim\nimport torch.utils.data\nimport torch.utils.data.distributed\nimport torchvision.transforms as transforms\nimport torchvision.datasets as datasets\nimport torchvision.models as models\n\nfrom torch.multiprocessing import Pool, Process",
"_____no_output_____"
]
],
[
[
"Helper Functions\n~~~~~~~~~~~~~~~~\n\nWe must also define some helper functions and classes that will make\ntraining easier. The ``AverageMeter`` class tracks training statistics\nlike accuracy and iteration count. The ``accuracy`` function computes\nand returns the top-k accuracy of the model so we can track learning\nprogress. Both are provided for training convenience but neither are\ndistributed training specific.\n\n\n",
"_____no_output_____"
]
],
[
[
"class AverageMeter(object):\n \"\"\"Computes and stores the average and current value\"\"\"\n def __init__(self):\n self.reset()\n\n def reset(self):\n self.val = 0\n self.avg = 0\n self.sum = 0\n self.count = 0\n\n def update(self, val, n=1):\n self.val = val\n self.sum += val * n\n self.count += n\n self.avg = self.sum / self.count\n\ndef accuracy(output, target, topk=(1,)):\n \"\"\"Computes the precision@k for the specified values of k\"\"\"\n with torch.no_grad():\n maxk = max(topk)\n batch_size = target.size(0)\n\n _, pred = output.topk(maxk, 1, True, True)\n pred = pred.t()\n correct = pred.eq(target.view(1, -1).expand_as(pred))\n\n res = []\n for k in topk:\n correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)\n res.append(correct_k.mul_(100.0 / batch_size))\n return res",
"_____no_output_____"
]
],
[
[
"Train Functions\n~~~~~~~~~~~~~~~\n\nTo simplify the main loop, it is best to separate a training epoch step\ninto a function called ``train``. This function trains the input model\nfor one epoch of the *train\\_loader*. The only distributed training\nartifact in this function is setting the\n`non\\_blocking <https://pytorch.org/docs/stable/notes/cuda.html#use-pinned-memory-buffers>`__\nattributes of the data and label tensors to ``True`` before the forward\npass. This allows asynchronous GPU copies of the data meaning transfers\ncan be overlapped with computation. This function also outputs training\nstatistics along the way so we can track progress throughout the epoch.\n\nThe other function to define here is ``adjust_learning_rate``, which\ndecays the initial learning rate at a fixed schedule. This is another\nboilerplate trainer function that is useful to train accurate models.\n\n\n",
"_____no_output_____"
]
],
[
[
"def train(train_loader, model, criterion, optimizer, epoch):\n\n batch_time = AverageMeter()\n data_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n top5 = AverageMeter()\n\n # switch to train mode\n model.train()\n\n end = time.time()\n for i, (input, target) in enumerate(train_loader):\n\n # measure data loading time\n data_time.update(time.time() - end)\n\n # Create non_blocking tensors for distributed training\n input = input.cuda(non_blocking=True)\n target = target.cuda(non_blocking=True)\n\n # compute output\n output = model(input)\n loss = criterion(output, target)\n\n # measure accuracy and record loss\n prec1, prec5 = accuracy(output, target, topk=(1, 5))\n losses.update(loss.item(), input.size(0))\n top1.update(prec1[0], input.size(0))\n top5.update(prec5[0], input.size(0))\n\n # compute gradients in a backward pass\n optimizer.zero_grad()\n loss.backward()\n\n # Call step of optimizer to update model params\n optimizer.step()\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % 10 == 0:\n print('Epoch: [{0}][{1}/{2}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\\t'\n 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(\n epoch, i, len(train_loader), batch_time=batch_time,\n data_time=data_time, loss=losses, top1=top1, top5=top5))\n\ndef adjust_learning_rate(initial_lr, optimizer, epoch):\n \"\"\"Sets the learning rate to the initial LR decayed by 10 every 30 epochs\"\"\"\n lr = initial_lr * (0.1 ** (epoch // 30))\n for param_group in optimizer.param_groups:\n param_group['lr'] = lr",
"_____no_output_____"
]
],
[
[
"Validation Function\n~~~~~~~~~~~~~~~~~~~\n\nTo track generalization performance and simplify the main loop further\nwe can also extract the validation step into a function called\n``validate``. This function runs a full validation step of the input\nmodel on the input validation dataloader and returns the top-1 accuracy\nof the model on the validation set. Again, you will notice the only\ndistributed training feature here is setting ``non_blocking=True`` for\nthe training data and labels before they are passed to the model.\n\n\n",
"_____no_output_____"
]
],
[
[
"def validate(val_loader, model, criterion):\n\n batch_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n top5 = AverageMeter()\n\n # switch to evaluate mode\n model.eval()\n\n with torch.no_grad():\n end = time.time()\n for i, (input, target) in enumerate(val_loader):\n\n input = input.cuda(non_blocking=True)\n target = target.cuda(non_blocking=True)\n\n # compute output\n output = model(input)\n loss = criterion(output, target)\n\n # measure accuracy and record loss\n prec1, prec5 = accuracy(output, target, topk=(1, 5))\n losses.update(loss.item(), input.size(0))\n top1.update(prec1[0], input.size(0))\n top5.update(prec5[0], input.size(0))\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % 100 == 0:\n print('Test: [{0}/{1}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\\t'\n 'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(\n i, len(val_loader), batch_time=batch_time, loss=losses,\n top1=top1, top5=top5))\n\n print(' * Prec@1 {top1.avg:.3f} Prec@5 {top5.avg:.3f}'\n .format(top1=top1, top5=top5))\n\n return top1.avg",
"_____no_output_____"
]
],
[
[
"Inputs\n~~~~~~\n\nWith the helper functions out of the way, now we have reached the\ninteresting part. Here is where we will define the inputs for the run.\nSome of the inputs are standard model training inputs such as batch size\nand number of training epochs, and some are specific to our distributed\ntraining task. The required inputs are:\n\n- **batch\\_size** - batch size for *each* process in the distributed\n training group. Total batch size across distributed model is\n batch\\_size\\*world\\_size\n\n- **workers** - number of worker processes used with the dataloaders in\n each process\n\n- **num\\_epochs** - total number of epochs to train for\n\n- **starting\\_lr** - starting learning rate for training\n\n- **world\\_size** - number of processes in the distributed training\n environment\n\n- **dist\\_backend** - backend to use for distributed training\n communication (i.e. NCCL, Gloo, MPI, etc.). In this tutorial, since\n we are using several multi-gpu nodes, NCCL is suggested.\n\n- **dist\\_url** - URL to specify the initialization method of the\n process group. This may contain the IP address and port of the rank0\n process or be a non-existant file on a shared file system. Here,\n since we do not have a shared file system this will incorporate the\n **node0-privateIP** and the port on node0 to use.\n\n\n",
"_____no_output_____"
]
],
[
[
"print(\"Collect Inputs...\")\n\n# Batch Size for training and testing\nbatch_size = 32\n\n# Number of additional worker processes for dataloading\nworkers = 2\n\n# Number of epochs to train for\nnum_epochs = 2\n\n# Starting Learning Rate\nstarting_lr = 0.1\n\n# Number of distributed processes\nworld_size = 4\n\n# Distributed backend type\ndist_backend = 'nccl'\n\n# Url used to setup distributed training\ndist_url = \"tcp://172.31.22.234:23456\"",
"_____no_output_____"
]
],
[
[
"Initialize process group\n~~~~~~~~~~~~~~~~~~~~~~~~\n\nOne of the most important parts of distributed training in PyTorch is to\nproperly setup the process group, which is the **first** step in\ninitializing the ``torch.distributed`` package. To do this, we will use\nthe ``torch.distributed.init_process_group`` function which takes\nseveral inputs. First, a *backend* input which specifies the backend to\nuse (i.e. NCCL, Gloo, MPI, etc.). An *init\\_method* input which is\neither a url containing the address and port of the rank0 machine or a\npath to a non-existant file on the shared file system. Note, to use the\nfile init\\_method, all machines must have access to the file, similarly\nfor the url method, all machines must be able to communicate on the\nnetwork so make sure to configure any firewalls and network settings to\naccomodate. The *init\\_process\\_group* function also takes *rank* and\n*world\\_size* arguments which specify the rank of this process when run\nand the number of processes in the collective, respectively.\nThe *init\\_method* input can also be \"env://\". In this case, the address\nand port of the rank0 machine will be read from the following two\nenvironment variables respectively: MASTER_ADDR, MASTER_PORT. If *rank*\nand *world\\_size* arguments are not specified in the *init\\_process\\_group*\nfunction, they both can be read from the following two environment\nvariables respectively as well: RANK, WORLD_SIZE.\n\nAnother important step, especially when each node has multiple gpus is\nto set the *local\\_rank* of this process. For example, if you have two\nnodes, each with 8 GPUs and you wish to train with all of them then\n$world\\_size=16$ and each node will have a process with local rank\n0-7. This local\\_rank is used to set the device (i.e. which GPU to use)\nfor the process and later used to set the device when creating a\ndistributed data parallel model. It is also recommended to use NCCL\nbackend in this hypothetical environment as NCCL is preferred for\nmulti-gpu nodes.\n\n\n",
"_____no_output_____"
]
],
[
[
"print(\"Initialize Process Group...\")\n# Initialize Process Group\n# v1 - init with url\ndist.init_process_group(backend=dist_backend, init_method=dist_url, rank=int(sys.argv[1]), world_size=world_size)\n# v2 - init with file\n# dist.init_process_group(backend=\"nccl\", init_method=\"file:///home/ubuntu/pt-distributed-tutorial/trainfile\", rank=int(sys.argv[1]), world_size=world_size)\n# v3 - init with environment variables\n# dist.init_process_group(backend=\"nccl\", init_method=\"env://\", rank=int(sys.argv[1]), world_size=world_size)\n\n\n# Establish Local Rank and set device on this node\nlocal_rank = int(sys.argv[2])\ndp_device_ids = [local_rank]\ntorch.cuda.set_device(local_rank)",
"_____no_output_____"
]
],
[
[
"Initialize Model\n~~~~~~~~~~~~~~~~\n\nThe next major step is to initialize the model to be trained. Here, we\nwill use a resnet18 model from ``torchvision.models`` but any model may\nbe used. First, we initialize the model and place it in GPU memory.\nNext, we make the model ``DistributedDataParallel``, which handles the\ndistribution of the data to and from the model and is critical for\ndistributed training. The ``DistributedDataParallel`` module also\nhandles the averaging of gradients across the world, so we do not have\nto explicitly average the gradients in the training step.\n\nIt is important to note that this is a blocking function, meaning\nprogram execution will wait at this function until *world\\_size*\nprocesses have joined the process group. Also, notice we pass our device\nids list as a parameter which contains the local rank (i.e. GPU) we are\nusing. Finally, we specify the loss function and optimizer to train the\nmodel with.\n\n\n",
"_____no_output_____"
]
],
[
[
"print(\"Initialize Model...\")\n# Construct Model\nmodel = models.resnet18(pretrained=False).cuda()\n# Make model DistributedDataParallel\nmodel = torch.nn.parallel.DistributedDataParallel(model, device_ids=dp_device_ids, output_device=local_rank)\n\n# define loss function (criterion) and optimizer\ncriterion = nn.CrossEntropyLoss().cuda()\noptimizer = torch.optim.SGD(model.parameters(), starting_lr, momentum=0.9, weight_decay=1e-4)",
"_____no_output_____"
]
],
[
[
"Initialize Dataloaders\n~~~~~~~~~~~~~~~~~~~~~~\n\nThe last step in preparation for the training is to specify which\ndataset to use. Here we use the `STL-10\ndataset <https://cs.stanford.edu/~acoates/stl10/>`__ from\n`torchvision.datasets.STL10 <https://pytorch.org/docs/stable/torchvision/datasets.html#torchvision.datasets.STL10>`__.\nThe STL10 dataset is a 10 class dataset of 96x96px color images. For use\nwith our model, we resize the images to 224x224px in the transform. One\ndistributed training specific item in this section is the use of the\n``DistributedSampler`` for the training set, which is designed to be\nused in conjunction with ``DistributedDataParallel`` models. This object\nhandles the partitioning of the dataset across the distributed\nenvironment so that not all models are training on the same subset of\ndata, which would be counterproductive. Finally, we create the\n``DataLoader``'s which are responsible for feeding the data to the\nprocesses.\n\nThe STL-10 dataset will automatically download on the nodes if they are\nnot present. If you wish to use your own dataset you should download the\ndata, write your own dataset handler, and construct a dataloader for\nyour dataset here.\n\n\n",
"_____no_output_____"
]
],
[
[
"print(\"Initialize Dataloaders...\")\n# Define the transform for the data. Notice, we must resize to 224x224 with this dataset and model.\ntransform = transforms.Compose(\n [transforms.Resize(224),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\n# Initialize Datasets. STL10 will automatically download if not present\ntrainset = datasets.STL10(root='./data', split='train', download=True, transform=transform)\nvalset = datasets.STL10(root='./data', split='test', download=True, transform=transform)\n\n# Create DistributedSampler to handle distributing the dataset across nodes when training\n# This can only be called after torch.distributed.init_process_group is called\ntrain_sampler = torch.utils.data.distributed.DistributedSampler(trainset)\n\n# Create the Dataloaders to feed data to the training and validation steps\ntrain_loader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=(train_sampler is None), num_workers=workers, pin_memory=False, sampler=train_sampler)\nval_loader = torch.utils.data.DataLoader(valset, batch_size=batch_size, shuffle=False, num_workers=workers, pin_memory=False)",
"_____no_output_____"
]
],
[
[
"Training Loop\n~~~~~~~~~~~~~\n\nThe last step is to define the training loop. We have already done most\nof the work for setting up the distributed training so this is not\ndistributed training specific. The only detail is setting the current\nepoch count in the ``DistributedSampler``, as the sampler shuffles the\ndata going to each process deterministically based on epoch. After\nupdating the sampler, the loop runs a full training epoch, runs a full\nvalidation step then prints the performance of the current model against\nthe best performing model so far. After training for num\\_epochs, the\nloop exits and the tutorial is complete. Notice, since this is an\nexercise we are not saving models but one may wish to keep track of the\nbest performing model then save it at the end of training (see\n`here <https://github.com/pytorch/examples/blob/master/imagenet/main.py#L184>`__).\n\n\n",
"_____no_output_____"
]
],
[
[
"best_prec1 = 0\n\nfor epoch in range(num_epochs):\n # Set epoch count for DistributedSampler\n train_sampler.set_epoch(epoch)\n\n # Adjust learning rate according to schedule\n adjust_learning_rate(starting_lr, optimizer, epoch)\n\n # train for one epoch\n print(\"\\nBegin Training Epoch {}\".format(epoch+1))\n train(train_loader, model, criterion, optimizer, epoch)\n\n # evaluate on validation set\n print(\"Begin Validation @ Epoch {}\".format(epoch+1))\n prec1 = validate(val_loader, model, criterion)\n\n # remember best prec@1 and save checkpoint if desired\n # is_best = prec1 > best_prec1\n best_prec1 = max(prec1, best_prec1)\n\n print(\"Epoch Summary: \")\n print(\"\\tEpoch Accuracy: {}\".format(prec1))\n print(\"\\tBest Accuracy: {}\".format(best_prec1))",
"_____no_output_____"
]
],
[
[
"Running the Code\n----------------\n\nUnlike most of the other PyTorch tutorials, this code may not be run\ndirectly out of this notebook. To run, download the .py version of this\nfile (or convert it using\n`this <https://gist.github.com/chsasank/7218ca16f8d022e02a9c0deb94a310fe>`__)\nand upload a copy to both nodes. The astute reader would have noticed\nthat we hardcoded the **node0-privateIP** and $world\\_size=4$ but\ninput the *rank* and *local\\_rank* inputs as arg[1] and arg[2] command\nline arguments, respectively. Once uploaded, open two ssh terminals into\neach node.\n\n- On the first terminal for node0, run ``$ python main.py 0 0``\n\n- On the second terminal for node0 run ``$ python main.py 1 1``\n\n- On the first terminal for node1, run ``$ python main.py 2 0``\n\n- On the second terminal for node1 run ``$ python main.py 3 1``\n\nThe programs will start and wait after printing \"Initialize Model...\"\nfor all four processes to join the process group. Notice the first\nargument is not repeated as this is the unique global rank of the\nprocess. The second argument is repeated as that is the local rank of\nthe process running on the node. If you run ``nvidia-smi`` on each node,\nyou will see two processes on each node, one running on GPU0 and one on\nGPU1.\n\nWe have now completed the distributed training example! Hopefully you\ncan see how you would use this tutorial to help train your own models on\nyour own datasets, even if you are not using the exact same distributed\nenvrionment. If you are using AWS, don't forget to **SHUT DOWN YOUR\nNODES** if you are not using them or you may find an uncomfortably large\nbill at the end of the month.\n\n**Where to go next**\n\n- Check out the `launcher\n utility <https://pytorch.org/docs/stable/distributed.html#launch-utility>`__\n for a different way of kicking off the run\n\n- Check out the `torch.multiprocessing.spawn\n utility <https://pytorch.org/docs/master/multiprocessing.html#spawning-subprocesses>`__\n for another easy way of kicking off multiple distributed processes.\n `PyTorch ImageNet Example <https://github.com/pytorch/examples/tree/master/imagenet>`__\n has it implemented and can demonstrate how to use it.\n\n- If possible, setup a NFS so you only need one copy of the dataset\n\n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0d7ef9a7804ba33aebff06183b25152ec04c14e | 46,904 | ipynb | Jupyter Notebook | notebooks/third_wave_models.ipynb | HimanshuMittal01/crop-radiation-prediction | c1cc04781c99bddb47538cb82348316045ddd40e | [
"MIT"
] | null | null | null | notebooks/third_wave_models.ipynb | HimanshuMittal01/crop-radiation-prediction | c1cc04781c99bddb47538cb82348316045ddd40e | [
"MIT"
] | null | null | null | notebooks/third_wave_models.ipynb | HimanshuMittal01/crop-radiation-prediction | c1cc04781c99bddb47538cb82348316045ddd40e | [
"MIT"
] | null | null | null | 162.297578 | 20,664 | 0.898324 | [
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"cowpea = pd.read_excel('../data/prepared.xlsx', sheet_name='Cowpea')\nmaize = pd.read_excel('../data/prepared.xlsx', sheet_name='Maize')\nrice = pd.read_excel('../data/prepared.xlsx', sheet_name='Rice')\nchickpea = pd.read_excel('../data/prepared.xlsx', sheet_name='Chickpea')\nmustard = pd.read_excel('../data/prepared.xlsx', sheet_name='Mustard')",
"_____no_output_____"
],
[
"mustard['Date'] = pd.to_datetime(mustard['Date'])",
"_____no_output_____"
],
[
"X = []\ny = []\n# Assuming data is not missing\nunique_dates = mustard['Date'].unique()\nunique_times = mustard['Time'].unique()\n\nfor date in unique_dates:\n X.append(mustard[mustard['Date']==date][['GSR','CT']].values)\n y.append(mustard[mustard['Date']==date][['Rn']].values)\n\nX = np.array(X)\ny = np.ravel(y) # Flatten the y",
"_____no_output_____"
],
[
"print(\"X shape:\", X.shape)\nprint(\"y shape:\", y.shape)",
"X shape: (22, 9, 2)\ny shape: (198,)\n"
],
[
"import tensorflow as tf\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, LSTM, Input, TimeDistributed",
"_____no_output_____"
],
[
"model = Sequential()\nmodel.add(Input(shape=[9, 2]))\nmodel.add(LSTM(16, return_sequences=True))\nmodel.add(LSTM(16, return_sequences=True))\nmodel.add(TimeDistributed(Dense(16)))\nmodel.add(Dense(1))",
"_____no_output_____"
],
[
"print(\"Model input shape:\", model.input_shape)\nprint(\"Model output shape:\", model.output_shape)",
"Model input shape: (None, 9, 2)\nModel output shape: (None, 9, 1)\n"
],
[
"# Compile the model\nmodel.compile(optimizer='adam', loss='mse', metrics=[tf.keras.metrics.RootMeanSquaredError()])",
"_____no_output_____"
],
[
"# Model summary\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nlstm (LSTM) (None, 9, 16) 1216 \n_________________________________________________________________\nlstm_1 (LSTM) (None, 9, 16) 2112 \n_________________________________________________________________\ntime_distributed (TimeDistri (None, 9, 16) 272 \n_________________________________________________________________\ndense_1 (Dense) (None, 9, 1) 17 \n=================================================================\nTotal params: 3,617\nTrainable params: 3,617\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"history = model.fit(X, y, validation_split=0.25, epochs=2000, batch_size=256, verbose=0)",
"_____no_output_____"
],
[
"print(\"Training loss:\", history.history['loss'][-1])\nprint(\"Validation loss:\", history.history['val_loss'][-1])\n\nif 'root_mean_squared_error' in history.history.keys():\n print(\"Training loss:\", history.history['root_mean_squared_error'][-1])\n print(\"Validation loss:\", history.history['val_root_mean_squared_error'][-1])",
"Training loss: 12859.2470703125\nValidation loss: 22031.6484375\nTraining loss: 113.39861297607422\nValidation loss: 148.4306182861328\n"
],
[
"# summarize history for accuracy\nplt.plot(history.history['root_mean_squared_error'])\nplt.plot(history.history['val_root_mean_squared_error'])\nplt.title('model root_mean_squared_error')\nplt.ylabel('root_mean_squared_error')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()\n# summarize history for loss\nplt.plot(history.history['loss'])\nplt.plot(history.history['val_loss'])\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epoch')\nplt.legend(['train', 'test'], loc='upper left')\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d802a7c4740a5a45f64816249103d8e3479998 | 30,365 | ipynb | Jupyter Notebook | pml1/figure_notebooks/chapter1_introduction_figures.ipynb | brickbitbot/pml-book | d76a7b521c4d8c02478f18afff86b8ed086f676d | [
"MIT"
] | 2,984 | 2020-12-24T18:30:35.000Z | 2022-03-31T04:55:19.000Z | pml1/figure_notebooks/chapter1_introduction_figures.ipynb | brickbitbot/pml-book | d76a7b521c4d8c02478f18afff86b8ed086f676d | [
"MIT"
] | 290 | 2020-12-29T20:00:41.000Z | 2022-03-30T16:56:05.000Z | pml1/figure_notebooks/chapter1_introduction_figures.ipynb | brickbitbot/pml-book | d76a7b521c4d8c02478f18afff86b8ed086f676d | [
"MIT"
] | 352 | 2020-12-25T08:19:15.000Z | 2022-03-30T21:44:14.000Z | 31.564449 | 860 | 0.578791 | [
[
[
"# Copyright 2021 Google LLC\n# Use of this source code is governed by an MIT-style\n# license that can be found in the LICENSE file or at\n# https://opensource.org/licenses/MIT.\n# Notebook authors: Kevin P. Murphy ([email protected])\n# and Mahmoud Soliman ([email protected])\n\n# This notebook reproduces figures for chapter 1 from the book\n# \"Probabilistic Machine Learning: An Introduction\"\n# by Kevin Murphy (MIT Press, 2021).\n# Book pdf is available from http://probml.ai",
"_____no_output_____"
]
],
[
[
"<a href=\"https://opensource.org/licenses/MIT\" target=\"_parent\"><img src=\"https://img.shields.io/github/license/probml/pyprobml\"/></a>",
"_____no_output_____"
],
[
"<a href=\"https://colab.research.google.com/github/probml/pml-book/blob/main/pml1/figure_notebooks/chapter1_introduction_figures.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"## Figure 1.1:<a name='1.1'></a> <a name='iris'></a> ",
"_____no_output_____"
],
[
"\n Three types of Iris flowers: Setosa, Versicolor and Virginica. Used with kind permission of Dennis Kramb and SIGNA",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_A.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_B.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.1_C.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## Figure 1.2:<a name='1.2'></a> <a name='cat'></a> ",
"_____no_output_____"
],
[
"\n Illustration of the image classification problem. From https://cs231n.github.io/ . Used with kind permission of Andrej Karpathy",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.2.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## Figure 1.3:<a name='1.3'></a> <a name='irisPairs'></a> ",
"_____no_output_____"
],
[
"\n Visualization of the Iris data as a pairwise scatter plot. On the diagonal we plot the marginal distribution of each feature for each class. The off-diagonals contain scatterplots of all possible pairs of features. \nFigure(s) generated by [iris_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_plot.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n iris_plot.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.4:<a name='1.4'></a> <a name='dtreeIrisDepth2'></a> ",
"_____no_output_____"
],
[
"\n Example of a decision tree of depth 2 applied to the Iris data, using just the petal length and petal width features. Leaf nodes are color coded according to the predicted class. The number of training samples that pass from the root to a node is shown inside each box; we show how many values of each class fall into this node. This vector of counts can be normalized to get a distribution over class labels for each node. We can then pick the majority class. Adapted from Figures 6.1 and 6.2 of <a href='#Geron2019'>[Aur19]</a> . ",
"_____no_output_____"
],
[
"To reproduce this figure, click the open in colab button: <a href=\"https://colab.research.google.com/github/probml/probml-notebooks/blob/master/notebooks/iris_dtree.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.4_A.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.4_B.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## Figure 1.5:<a name='1.5'></a> <a name='linreg'></a> ",
"_____no_output_____"
],
[
"\n(a) Linear regression on some 1d data. (b) The vertical lines denote the residuals between the observed output value for each input (blue circle) and its predicted value (red cross). The goal of least squares regression is to pick a line that minimizes the sum of squared residuals. \nFigure(s) generated by [linreg_residuals_plot.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_residuals_plot.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n linreg_residuals_plot.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.6:<a name='1.6'></a> <a name='polyfit2d'></a> ",
"_____no_output_____"
],
[
"\n Linear and polynomial regression applied to 2d data. Vertical axis is temperature, horizontal axes are location within a room. Data was collected by some remote sensing motes at Intel's lab in Berkeley, CA (data courtesy of Romain Thibaux). (a) The fitted plane has the form $ f ( \\bm x ) = w_0 + w_1 x_1 + w_2 x_2$. (b) Temperature data is fitted with a quadratic of the form $ f ( \\bm x ) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2$. \nFigure(s) generated by [linreg_2d_surface_demo.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_2d_surface_demo.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n linreg_2d_surface_demo.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.7:<a name='1.7'></a> <a name='linregPoly'></a> ",
"_____no_output_____"
],
[
"\n(a-c) Polynomials of degrees 2, 14 and 20 fit to 21 datapoints (the same data as in \\cref fig:linreg ). (d) MSE vs degree. \nFigure(s) generated by [linreg_poly_vs_degree.py](https://github.com/probml/pyprobml/blob/master/scripts/linreg_poly_vs_degree.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n linreg_poly_vs_degree.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.8:<a name='1.8'></a> <a name='eqn:irisClustering'></a> ",
"_____no_output_____"
],
[
"\n(a) A scatterplot of the petal features from the iris dataset. (b) The result of unsupervised clustering using $K=3$. \nFigure(s) generated by [iris_kmeans.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_kmeans.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n iris_kmeans.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.9:<a name='1.9'></a> <a name='pcaDemo'></a> ",
"_____no_output_____"
],
[
"\n(a) Scatterplot of iris data (first 3 features). Points are color coded by class. (b) We fit a 2d linear subspace to the 3d data using PCA. The class labels are ignored. Red dots are the original data, black dots are points generated from the model using $ \\bm x = \\mathbf W \\bm z + \\bm \\mu $, where $ \\bm z $ are latent points on the underlying inferred 2d linear manifold. \nFigure(s) generated by [iris_pca.py](https://github.com/probml/pyprobml/blob/master/scripts/iris_pca.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n iris_pca.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.10:<a name='1.10'></a> <a name='humanoid'></a> ",
"_____no_output_____"
],
[
"\n Examples of some control problems. (a) Space Invaders Atari game. From https://gym.openai.com/envs/SpaceInvaders-v0/ . (b) Controlling a humanoid robot in the MuJuCo simulator so it walks as fast as possible without falling over. From https://gym.openai.com/envs/Humanoid-v2/ ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.10_A.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.10_B.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## Figure 1.11:<a name='1.11'></a> <a name='cake'></a> ",
"_____no_output_____"
],
[
"\n The three types of machine learning visualized as layers of a chocolate cake. This figure (originally from https://bit.ly/2m65Vs1 ) was used in a talk by Yann LeCun at NIPS'16, and is used with his kind permission",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.11.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## Figure 1.12:<a name='1.12'></a> <a name='emnist'></a> ",
"_____no_output_____"
],
[
"\n(a) Visualization of the MNIST dataset. Each image is $28 \\times 28$. There are 60k training examples and 10k test examples. We show the first 25 images from the training set. \nFigure(s) generated by [mnist_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/mnist_viz_tf.py) [emnist_viz_pytorch.py](https://github.com/probml/pyprobml/blob/master/scripts/emnist_viz_pytorch.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n mnist_viz_tf.py",
"_____no_output_____"
],
[
"try_deimport()\n%run -n emnist_viz_pytorch.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.13:<a name='1.13'></a> <a name='CIFAR'></a> ",
"_____no_output_____"
],
[
"\n(a) Visualization of the Fashion-MNIST dataset <a href='#fashion'>[XRV17]</a> . The dataset has the same size as MNIST, but is harder to classify. There are 10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle-boot. We show the first 25 images from the training set. \nFigure(s) generated by [fashion_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/fashion_viz_tf.py) [cifar_viz_tf.py](https://github.com/probml/pyprobml/blob/master/scripts/cifar_viz_tf.py) ",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
],
[
"try_deimport()\n%run -n fashion_viz_tf.py",
"_____no_output_____"
],
[
"try_deimport()\n%run -n cifar_viz_tf.py",
"_____no_output_____"
]
],
[
[
"## Figure 1.14:<a name='1.14'></a> <a name='imagenetError'></a> ",
"_____no_output_____"
],
[
"\n(a) Sample images from the \\bf ImageNet dataset <a href='#ILSVRC15'>[Rus+15]</a> . This subset consists of 1.3M color training images, each of which is $256 \\times 256$ pixels in size. There are 1000 possible labels, one per image, and the task is to minimize the top-5 error rate, i.e., to ensure the correct label is within the 5 most probable predictions. Below each image we show the true label, and a distribution over the top 5 predicted labels. If the true label is in the top 5, its probability bar is colored red. Predictions are generated by a convolutional neural network (CNN) called ``AlexNet'' (\\cref sec:alexNet ). From Figure 4 of <a href='#Krizhevsky12'>[KSH12]</a> . Used with kind permission of Alex Krizhevsky. (b) Misclassification rate (top 5) on the ImageNet competition over time. Used with kind permission of Andrej Karpathy",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.14_A.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.14_B.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## Figure 1.15:<a name='1.15'></a> <a name='termDoc'></a> ",
"_____no_output_____"
],
[
"\n Example of a term-document matrix, where raw counts have been replaced by their TF-IDF values (see \\cref sec:tfidf ). Darker cells are larger values. From https://bit.ly/2kByLQI . Used with kind permission of Christoph Carl Kling",
"_____no_output_____"
]
],
[
[
"#@title Click me to run setup { display-mode: \"form\" }\ntry:\n if PYPROBML_SETUP_ALREADY_RUN:\n print('skipping setup')\nexcept:\n PYPROBML_SETUP_ALREADY_RUN = True\n print('running setup...')\n !git clone --depth 1 https://github.com/probml/pyprobml /pyprobml &> /dev/null \n %cd -q /pyprobml/scripts\n %reload_ext autoreload \n %autoreload 2\n !pip install superimport deimport -qqq\n import superimport\ndef try_deimport():\n try: \n from deimport.deimport import deimport\n deimport(superimport)\n except Exception as e:\n print(e)\n print('finished!')",
"_____no_output_____"
]
],
[
[
"<img src=\"https://raw.githubusercontent.com/probml/pml-book/main/pml1/figures/images/Figure_1.15.png\" width=\"256\"/>",
"_____no_output_____"
],
[
"## References:\n <a name='Geron2019'>[Aur19]</a> G. Aur'elien \"Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for BuildingIntelligent Systems (2nd edition)\". (2019). \n\n<a name='Krizhevsky12'>[KSH12]</a> A. Krizhevsky, I. Sutskever and G. Hinton. \"Imagenet classification with deep convolutional neural networks\". (2012). \n\n<a name='ILSVRC15'>[Rus+15]</a> O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg and L. Fei-Fei. \"ImageNet Large Scale Visual Recognition Challenge\". In: ijcv (2015). \n\n<a name='fashion'>[XRV17]</a> H. Xiao, K. Rasul and R. Vollgraf. \"Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms\". abs/1708.07747 (2017). arXiv: 1708.07747 \n\n",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0d81a5c7e78d622708f6ae8ab208fb39ee31139 | 5,474 | ipynb | Jupyter Notebook | examples/introduction.ipynb | jahn96/Jupyter-Bifrost | decc3ff7f4748f8e5db65e8a746299f9fd390af7 | [
"BSD-3-Clause"
] | null | null | null | examples/introduction.ipynb | jahn96/Jupyter-Bifrost | decc3ff7f4748f8e5db65e8a746299f9fd390af7 | [
"BSD-3-Clause"
] | null | null | null | examples/introduction.ipynb | jahn96/Jupyter-Bifrost | decc3ff7f4748f8e5db65e8a746299f9fd390af7 | [
"BSD-3-Clause"
] | null | null | null | 40.548148 | 1,450 | 0.597552 | [
[
[
"# Introduction",
"_____no_output_____"
]
],
[
[
"import jupyter_bifrost\nimport pandas as pd\nimport numpy as np\nfrom sklearn.datasets import load_iris",
"_____no_output_____"
],
[
"cols = [\"foo\",\"y\", \"bar\", \"baz\", \"something\", \"else\"]\ndist = np.random.uniform(0,1, size=(1000,len(cols)))\nscatter_df = pd.DataFrame(dist, columns=cols)\n\nbar_df = pd.DataFrame([\n [\"John\", \"Developer\", 1],\n [\"Jay\", \"Developer\", 2],\n [\"Angela\", \"Designer\", 1],\n [\"Brian\", \"Leader\", 20]\n], columns=[\"Name\", \"Job\", \"Years Worked For Jupyter\"])\n\niris_ds = load_iris()\niris_df = pd.DataFrame(iris_ds[\"data\"], columns=iris_ds[\"feature_names\"])\ntitanic_df = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/titanic.csv')\niris_df['class'] = ['iris_setosa'] * 50 + ['iris_versicolour'] * 50 + ['iris_virginica'] * 50 ",
"_____no_output_____"
],
[
"iris_df = iris_df[(iris_df['class'].isin([\"iris_versicolour\",\"iris_virginica\"]))&(iris_df['petal length (cm)'] >= 2.8355468749999995) & (iris_df['petal length (cm)'] <= 6.055546875)]\n",
"_____no_output_____"
],
[
"titanic_df[\"age\"].isna()",
"_____no_output_____"
],
[
"x = [\"zap\", \"1,2\", \"age\", \"better\"]\nsorted(x)",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0d81ddbdeffd66a80e769c714c2866071fb1774 | 75,871 | ipynb | Jupyter Notebook | docs/tutorial/Using -pipe- notation.ipynb | Ecoent/biosteam | f1371386d089df3aa8ce041175f210c0318c1fe0 | [
"MIT"
] | 1 | 2020-03-06T13:59:30.000Z | 2020-03-06T13:59:30.000Z | docs/tutorial/Using -pipe- notation.ipynb | Ecoent/biosteam | f1371386d089df3aa8ce041175f210c0318c1fe0 | [
"MIT"
] | null | null | null | docs/tutorial/Using -pipe- notation.ipynb | Ecoent/biosteam | f1371386d089df3aa8ce041175f210c0318c1fe0 | [
"MIT"
] | null | null | null | 282.048327 | 64,112 | 0.897075 | [
[
[
"# Using -pipe- notation",
"_____no_output_____"
],
[
"**Connecting unit operations can be simplified through -pipe- notation. As an example, here we create a process with multiple units and connect them as a demonstration.**",
"_____no_output_____"
],
[
"With -pipe- notation you can get stream outputs and set stream inputs in the following format:",
"_____no_output_____"
]
],
[
[
"# U1-n -> U1.outs[n]\n# U1-[0, 1] -> [U1.outs[i] for i in [0, 1]]\n# s1-U1 -> U1.ins[:] = [s1]\n# s1-n-U1 -> U1.ins[n] = s1\n# [s1, s2]-U1 -> U1.ins[:] = [s1, s2]\n# U1-n1-n2-U2 -> U2.ins[n2] = U1.outs[n1]",
"_____no_output_____"
]
],
[
[
"As an example, create 2 feeds, 2 Mixers and 2 Splitters:",
"_____no_output_____"
]
],
[
[
"import biosteam as bst\nimport thermosteam as tmo\nfrom biosteam.units import Mixer, Splitter\n\n# Set property pacakge\nchemicals = tmo.Chemicals(['Water'])\ntmo.settings.set_thermo(chemicals)\n\n# Set feed stream and units\nfeed1 = tmo.Stream('feed1')\nM1 = Mixer('M1', outs='s1')\nS1 = Splitter('S1', outs=('s2', 'product1'), split=0.5)\nfeed2 = tmo.Stream('feed2')\nM2 = Mixer('M2', outs='s3')\nS2 = Splitter('S2', outs=('recycle', 'product2'), split=0.5)\nbst.find.diagram()",
"_____no_output_____"
]
],
[
[
"Now connect streams linearly along the units, and create a loop between S2 and M1:",
"_____no_output_____"
]
],
[
[
"# In -pipe- notation:\n(feed1, S2-0)-M1-S1\n(feed2, S1-0)-M2-S2\n\n# Without -pipe- notation:\n# M1.ins[:] = (feed1, S2.outs[0])\n# S1.ins[:] = M1.outs\n# M2.ins[:] = (feed2, S1.outs[0])\n# S2.ins[:] = M2.outs\n\nbst.find.diagram(format='png')",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d8220d6fec853c080c8215a7f31fee8b5600b5 | 2,384 | ipynb | Jupyter Notebook | Intro_to_Python.ipynb | Daenarics/Elective-1-3 | 49b8513b46edf63a4cc58b05a769c21ed9a561d2 | [
"Apache-2.0"
] | null | null | null | Intro_to_Python.ipynb | Daenarics/Elective-1-3 | 49b8513b46edf63a4cc58b05a769c21ed9a561d2 | [
"Apache-2.0"
] | null | null | null | Intro_to_Python.ipynb | Daenarics/Elective-1-3 | 49b8513b46edf63a4cc58b05a769c21ed9a561d2 | [
"Apache-2.0"
] | null | null | null | 23.372549 | 234 | 0.454698 | [
[
[
"<a href=\"https://colab.research.google.com/github/Daenarics/Elective-1-3/blob/main/Intro_to_Python.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"##Python Indention",
"_____no_output_____"
]
],
[
[
"if 5<2:\n print(\"Five is less than two\")\nelse:\n print(\"Five is greater than two\")",
"Five is greater than two\n"
]
],
[
[
"##Python Comments",
"_____no_output_____"
]
],
[
[
"# This is a program that displays Hello, World\n\nprint(\"Hello, World\")\nprint('Welcome to Python Programming')",
"Hello, World\nWelcome to Python Programming\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d82dc3f5fa7eca8b0d1a750aa066be9b91a019 | 69,522 | ipynb | Jupyter Notebook | biobb_wf_cwl_tutorial/notebooks/biobb_CWL_tutorial.ipynb | longr/biobb_wf_cwl_tutorial | 87b1378fdfd37bc4a232dae95486a1c4040c1505 | [
"Apache-2.0"
] | null | null | null | biobb_wf_cwl_tutorial/notebooks/biobb_CWL_tutorial.ipynb | longr/biobb_wf_cwl_tutorial | 87b1378fdfd37bc4a232dae95486a1c4040c1505 | [
"Apache-2.0"
] | null | null | null | biobb_wf_cwl_tutorial/notebooks/biobb_CWL_tutorial.ipynb | longr/biobb_wf_cwl_tutorial | 87b1378fdfd37bc4a232dae95486a1c4040c1505 | [
"Apache-2.0"
] | 2 | 2020-06-17T10:36:41.000Z | 2020-09-03T13:13:12.000Z | 48.548883 | 1,423 | 0.629815 | [
[
[
"# Common Workflow Language with BioExcel Building Blocks\n### Based on the Protein MD Setup tutorial using BioExcel Building Blocks (biobb)\n***\nThis tutorial aims to illustrate the process of **building up a CWL workflow** using the **BioExcel Building Blocks library (biobb)**. The tutorial is based on the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). \n***\n**Biobb modules** used:\n\n - [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases.\n - [biobb_model](https://github.com/bioexcel/biobb_model): Tools to model macromolecular structures.\n - [biobb_md](https://github.com/bioexcel/biobb_md): Tools to setup and run Molecular Dynamics simulations.\n - [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories.\n \n**Software requirements**:\n\n - [cwltool](https://github.com/common-workflow-language/cwltool): Common Workflow Language tool description reference implementation.\n - [docker](https://www.docker.com/): Docker container platform.\n\n***\n### Tutorial Sections:\n 1. [CWL workflows: Brief Introduction](#intro)\n \n \n 2. [BioExcel building blocks TOOLS CWL Descriptions](#tools)\n \n * [Tool Building Block CWL Sections](#toolcwl)\n * [Complete Pdb Building Block CWL description](#pdbcwl)\n \n \n 3. [BioExcel building blocks WORKFLOWS CWL Descriptions](#workflows)\n \n * [Header](#cwlheader)\n * [Inputs](#inputs)\n * [Outputs](#outputs)\n * [Steps](#steps)\n * [Input of a Run](#run)\n * [Complete Workflow](#wf)\n * [Running the CWL workflow](#runwf)\n * [Cwltool workflow output](#wfoutput)\n \n \n 4. [Protein MD-Setup CWL workflow with BioExcel building blocks](#mdsetup)\n \n * [Steps](#mdsteps)\n * [Inputs](#mdinputs)\n * [Outputs](#mdoutputs)\n * [Complete Workflow](#mdworkflow)\n * [Input of a Run](#mdrun)\n * [Running the CWL workflow](#mdcwlrun)\n\n \n 5. [Questions & Comments](#questions)",
"_____no_output_____"
],
[
"***\n\n<img src=\"logo.png\" />\n\n***",
"_____no_output_____"
],
[
"<a id=\"intro\"></a>\n## CWL workflows: Brief Introduction\n\nThe **Common Workflow Language (CWL)** is an open standard for describing analysis **workflows and tools** in a way that makes them **portable and scalable** across a variety of software and hardware environments, from workstations to cluster, cloud, and high performance computing (HPC) environments.\n\n**CWL** is a community-led specification to express **portable workflow and tool descriptions**, which can be executed by **multiple leading workflow engine implementations**. Unlike previous standardisation attempts, CWL has taken a pragmatic approach and focused on what most workflow systems are able to do: Execute command line tools and pass files around in a top-to-bottom pipeline. At the heart of CWL workflows are the **tool descriptions**. A command line is described, with parameters, input and output files, in a **YAML format** so they can be shared across workflows and linked to from registries like **ELIXIR’s bio.tools**. These are then combined and wired together in a **second YAML file** to form a workflow template, which can be **executed on any of the supported implementations**, repeatedly and **on different platforms** by specifying input files and workflow parameters. The [CWL User Guide](https://www.commonwl.org/user_guide/index.html) gives a gentle introduction to the language, while the more detailed [CWL specifications](https://www.commonwl.org/v1.1/) formalize CWL concepts so they can be implemented by the different workflow systems. A couple of **BioExcel webinars** were focused on **CWL**, an [introduction to CWL](https://www.youtube.com/watch?v=jfQb1HJWRac) and a [new open source tool to run CWL workflows on LSF (CWLEXEC)](https://www.youtube.com/watch?v=_jSTZMWtPAY).\n\n**BioExcel building blocks** are all **described in CWL**. A specific **CWL** section in the **workflow manager adapters** [github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) gathers all the descriptions, divided in the different categories: io, md, analysis, chemistry, model and pmx (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)).\n\nIn this tutorial, we are going to use these **BioExcel building blocks CWL descriptions** to build a **CWL** biomolecular workflow. In particular, the assembled workflow will perform a complete **Molecular Dynamics setup** (MD Setup) using **GROMACS MD package**, taking as a base the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). \n\nNo additional installation is required apart from the **Docker platform** and the **CWL tool reference executor**, as the **building blocks** will be launched using their associated **Docker containers**. ",
"_____no_output_____"
],
[
"***\n<a id=\"tools\"></a>\n\n## BioExcel building blocks TOOLS CWL Descriptions\n\nWriting a workflow in CWL using the **BioExcel building blocks** is possible thanks to the already generated **CWL descriptions** for all the **building blocks** (wrappers). A specific **CWL** section in the **workflow manager adapters** [github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) gathers all the descriptions, divided in the different categories: io, md, analysis, chemistry, model and pmx (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)).\n\n***\n<a id=\"toolcwl\"></a>\n### Tool Building Block CWL sections:\n\n**Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. The building block used for this is the [Pdb](https://github.com/bioexcel/biobb_io/blob/master/biobb_io/api/pdb.py) building block, from the [biobb_io](https://github.com/bioexcel/biobb_io) package, including tools to **fetch biomolecular data from public databases**. The **CWL description** for this building block can be found in the [adapters github repo](https://github.com/bioexcel/biobb_adapters/blob/master/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl), and is shown in the following notebook cell. Description files like this one for all the steps of the workflow are needed to build and run a **CLW workflow**. To build a **CWL workflow** with **BioExcel building blocks**, one just need to download all the needed description files from the [biobb_adapters github](https://github.com/bioexcel/biobb_adapters/blob/master/biobb_adapters/cwl). \n\nThis particular example of a **Pdb building block** is useful to illustrate the most important points of the **CWL description**:\n* **hints**: The **CWL hints** section describes the **process requirements** that should (but not have to) be satisfied to run the wrapped command. The implementation may report a **warning** if a hint cannot be satisfied. In the **BioExcel building blocks**, a **DockerRequirement** subsection is always present in the **hints** section, pointing to the associated **Docker container**. The **dockerPull: parameter** takes the same value that you would pass to a **docker pull** command. That is, the name of the **container image**. In this case we have used the container called **biobb_io:latest** that can be found in the **quay.io repository**, which contains the **Pdb** building block.",
"_____no_output_____"
]
],
[
[
"hints:\n DockerRequirement:\n dockerPull: quay.io/biocontainers/biobb_io:latest",
"_____no_output_____"
]
],
[
[
"* **namespaces and schemas**: Input and output **metadata** may be represented within a tool or workflow. Such **metadata** must use a **namespace prefix** listed in the **$namespaces and $schemas sections** of the document. All **BioExcel building blocks CWL specifications** use the **EDAM ontology** (http://edamontology.org/) as **namespace**, with all terms included in its **Web Ontology Language** (owl) of knowledge representation (http://edamontology.org/EDAM_1.22.owl). **BioExcel** is contributing to the expansion of the **EDAM ontology** with the addition of new structural terms such as [GROMACS XTC format](http://edamontology.org/format_3875) or the [trajectory visualization operation](http://edamontology.org/operation_3890).",
"_____no_output_____"
]
],
[
[
"$namespaces:\n edam: http://edamontology.org/\n$schemas:\n - http://edamontology.org/EDAM_1.22.owl",
"_____no_output_____"
]
],
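[
[
"For instance (a sketch, not part of the original tutorial), once the *edam:* namespace prefix is declared, a **File** object in a **job file** can be annotated with an **EDAM format** term, allowing implementations to check format compatibility between connected steps:",
"_____no_output_____"
]
],
[
[
"# Sketch of a File object in a CWL job file, annotated with an EDAM format term\n# (the input name 'input_pdb' is hypothetical, used only for illustration)\ninput_pdb:\n class: File\n format: http://edamontology.org/format_1476\n path: tutorial.pdb",
"_____no_output_____"
]
],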
[
[
"* **inputs**: The **inputs section** of a **tool** contains a list of input parameters that **control how to run the tool**. Each parameter has an **id** for the name of parameter, and **type** describing what types of values are valid for that parameter. Available primitive types are *string, int, long, float, double, and null*; complex types are *array and record*; in addition there are special types *File, Directory and Any*. The field **inputBinding** is optional and indicates whether and how the input parameter should appear on the tool’s command line, in which **position** (position), and with which **name** (prefix). The **default field** stores the **default value** for the particular **input parameter**. <br>In this particular example, the **Pdb building block** has two different **input parameters**: *output_pdb_path* and *config*. The *output_pdb_path* input parameter defines the name of the **output file** that will contain the downloaded **PDB structure**. The *config* parameter is common to all **BioExcel building blocks**, and gathers all the **properties** of the building block in a **json format**. The **question mark** after the string type (*string?*) denotes that this input is **optional**. ",
"_____no_output_____"
]
],
[
[
"inputs:\n output_pdb_path:\n type: string\n inputBinding:\n position: 1\n prefix: --output_pdb_path\n default: 'downloaded_structure.pdb'\n\n config:\n type: string?\n inputBinding:\n position: 2\n prefix: --config\n default: '{\"pdb_code\" : \"1aki\"}'",
"_____no_output_____"
]
],
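[
[
"To illustrate how these **inputBinding** fields are used (a sketch, not part of the original tutorial), the command line that **cwltool** assembles inside the container from the *baseCommand* (pdb) and the two inputs above, taking their default values, would look like this:",
"_____no_output_____"
]
],
[
[
"# Sketch of the command line assembled by cwltool from the description above:\n# inputs are ordered by 'position' and each one is preceded by its 'prefix'\npdb --output_pdb_path downloaded_structure.pdb --config '{\"pdb_code\" : \"1aki\"}'",
"_____no_output_____"
]
],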
[
[
"* **outputs**: The **outputs section** of a **tool** contains a list of output parameters that should be returned after running the **tool**. Similarly to the inputs section, each parameter has an **id** for the name of parameter, and **type** describing what types of values are valid for that parameter. The **outputBinding** field describes how to set the value of each output parameter. The **glob field** consists of the name of a file in the **output directory**. In the **BioExcel building blocks**, every **output** has an associated **input parameter** defined in the previous input section, defining the name of the file to be generated. <br>In the particular **Pdb building block** example, the *output_pdb_file* parameter of type *File* is coupled to the *output_pdb_path* input parameter, using the **outputBinding** and the **glob** fields. The standard **PDB** format of the output file is also specified using the **EDAM ontology** format id 1476 ([edam:format_1476](http://edamontology.org/format_1476)). ",
"_____no_output_____"
]
],
[
[
"outputs:\n output_pdb_file:\n type: File\n format: edam:format_1476\n outputBinding:\n glob: $(inputs.output_pdb_path)",
"_____no_output_____"
]
],
[
[
"For more information on CWL tools description, please refer to the [CWL User Guide](https://www.commonwl.org/user_guide/index.html) or the [CWL specifications](https://www.commonwl.org/v1.1/).\n***\n<a id=\"pdbcwl\"></a>\n### Complete Pdb Building Block CWL description:\n\nExample of a **BioExcel building block CWL description** (pdb from biobb_io package)",
"_____no_output_____"
]
],
[
[
"# Example of a BioExcel building block CWL description (pdb from biobb_io package)\n\n#!/usr/bin/env cwl-runner\ncwlVersion: v1.0\nclass: CommandLineTool\nbaseCommand: pdb\nhints:\n DockerRequirement:\n dockerPull: quay.io/biocontainers/biobb_io:latest\n \ninputs:\n output_pdb_path:\n type: string\n inputBinding:\n position: 1\n prefix: --output_pdb_path\n default: 'downloaded_structure.pdb'\n\n config:\n type: string?\n inputBinding:\n position: 2\n prefix: --config\n default: '{\"pdb_code\" : \"1aki\"}'\n \noutputs:\n output_pdb_file:\n type: File\n format: edam:format_1476\n outputBinding:\n glob: $(inputs.output_pdb_path)\n\n$namespaces:\n edam: http://edamontology.org/\n$schemas:\n - http://edamontology.org/EDAM_1.22.owl",
"_____no_output_____"
]
],
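[
[
"A **tool description** like the one above can also be executed on its own, without a workflow. Assuming the description is saved to a file named *pdb.cwl*, **cwltool** can build the input object directly from command line arguments (a usage sketch, not part of the original tutorial):",
"_____no_output_____"
]
],
[
[
"# Run the single Pdb building block with cwltool, passing the inputs as command line arguments\n# (assumes the description above was saved as pdb.cwl in the current directory)\ncwltool pdb.cwl --output_pdb_path tutorial.pdb --config '{\"pdb_code\" : \"1aki\"}'",
"_____no_output_____"
]
],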
[
[
"***\n<a id=\"workflows\"></a>\n## BioExcel building blocks WORKFLOWS CWL Descriptions\n\nNow that we have seen the **BioExcel building blocks CWL descriptions**, we can use them to build our first **biomolecular workflow** as a demonstrator. All **CWL workflows** are divided in **two files**: the **CWL description** and the **YAML** or **JSON** files containing **all workflow inputs**. Starting with the **CWL workflow description**, let's explore our first example **section by section**.\n<a id=\"cwlheader\"></a>\n### Header:\n\n* **cwlVersion** field indicates the version of the **CWL spec** used by the document.\n* **class** field indicates this document describes a **workflow**.",
"_____no_output_____"
]
],
[
[
"# !/usr/bin/env cwl-runner\n\ncwlVersion: v1.0\nclass: Workflow\nlabel: Example CWL Header\ndoc: |\n An example of how to create a CWl header. We have specified the version\n of CWL that we are using; the class, which is a 'workflow'. The label\n field should provide a short title or description of the workflow and\n the description should provide a longer description of what the workflow\n doe.",
"_____no_output_____"
]
],
[
[
"<a id=\"inputs\"></a>\n### Inputs:\n\nThe **inputs section** describes the inputs for **each of the steps** of the workflow. The **BioExcel building blocks (biobb)** have three types of **input parameters**: **input**, **output**, and **properties**. The **properties** parameter, which contains all the input parameters that are neither **input** nor **output files**, is defined in **JSON format** (see examples in the **Protein MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup)).\n\n**Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. Two different **inputs** are needed for this step: the **name of the file** that will contain the downloaded PDB structure (*step1_output_name*), and the **properties** of the building block (*step1_properties*), that in this case will indicate the PDB code to look for (see **Input of a run** section). Both input parameters have type *string* in this **building block**. ",
"_____no_output_____"
]
],
[
[
"# CWL workflow inputs section example\ninputs:\n step1_output_name: string\n step1_properties: string",
"_____no_output_____"
]
],
[
[
"<a id=\"outputs\"></a>\n### Outputs:\n\nThe **outputs:** section describes the set of **final outputs** from the **workflow**. These outputs can be a collection of outputs from **different steps of the workflow**. Each output is a `key: value` pair. The `key` should be a unique identifier, and the value should be a dictionary (consisting of `key: value` pairs). These `keys` consists of `label`, which is a title or name for the output; `doc`, which is a longer description of what this output is; `type`, which is the data type expected; and `outputSource`, which connects the output parameter of a **particular step** to the **workflow final output parameter**.",
"_____no_output_____"
]
],
[
[
"outputs:\n pdb: #unique identifier\n label: Protein structure \n doc: |\n Step 1 of the workflow, download a 'protein structure' from the\n 'PDB database'. The *pdb* 'output' is a 'file' containing the\n 'protein structure' in 'PDB format', which is connected to the\n output parameter *output_pdb_file* of the 'step1 of the workflow'\n (*step1_pdb*).\n type: File #data type\n outputSource: step1_pdb/output_pdb_file",
"_____no_output_____"
]
],
[
[
"<a id=\"steps\"></a>\n### Steps:\n\nThe **steps section** describes the actual steps of the workflow. Steps are **connected** one to the other through the **input parameters**.\n\n**Workflow steps** are not necessarily run in the order they are listed, instead **the order is determined by the dependencies between steps**. In addition, workflow steps which do not depend on one another may run **in parallel**.\n\n**Example**: Step 1 and 2 of the workflow, download a **protein structure** from the **PDB database**, and **fix the side chains**, adding any side chain atoms missing in the original structure. Note how **step1 and step2** are **connected** through the **output** of one and the **input** of the other: **Step2** (*step2_fixsidechain*) receives as **input** (*input_pdb_path*) the **output of the step1** (*step1_pdb*), identified as *step1_pdb/output_pdb_file*.",
"_____no_output_____"
]
],
[
[
"# CWL workflow steps section example\n step1_pdb:\n label: Fetch PDB Structure\n doc: |\n Download a protein structure from the PDB database\n run: biobb/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl\n in:\n output_pdb_path: step1_pdb_name\n config: step1_pdb_config\n out: [output_pdb_file]\n\n step2_fixsidechain:\n label: Fix Protein structure\n doc: |\n Fix the side chains, adding any side chain atoms missing in the\n original structure.\n run: biobb/biobb_adapters/cwl/biobb_model/model/fix_side_chain.cwl\n in:\n input_pdb_path: step1_pdb/output_pdb_file\n out: [output_pdb_file]",
"_____no_output_____"
]
],
[
[
"<a id=\"run\"></a>\n### Input of a run:\n\nAs previously stated, all **CWL workflows** are divided in **two files**: the **CWL description** and the **YAML** or **JSON** files containing **all workflow inputs**. In this example, we are going to produce a **YAML** formatted object in a separate file describing the **inputs of our run**.\n\n**Example**: Step 1 of the workflow, download a **protein structure** from the **PDB database**. The **step1_output_name** contains the name of the file that is going to be produced by the **building block**, whereas the **JSON-formatted properties** (**step1_properties**) contain the **pdb code** of the structure to be downloaded: \n\n* step1_output_name: **\"tutorial_1aki.pdb\"**\n* step1_properties: **{\"pdb_code\" : \"1aki\"}**",
"_____no_output_____"
]
],
[
[
"step1_output_name: 'tutorial_1aki.pdb'\nstep1_properties: '{\"pdb_code\" : \"1aki\"}'",
"_____no_output_____"
]
],
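[
[
"The same **input object** can equivalently be written in **JSON** instead of **YAML** (an equivalent sketch of the same run; note that the *config* value is itself a JSON string, so its inner quotes must be escaped):",
"_____no_output_____"
]
],
[
[
"{\n \"step1_output_name\": \"tutorial_1aki.pdb\",\n \"step1_properties\": \"{\\\"pdb_code\\\" : \\\"1aki\\\"}\"\n}",
"_____no_output_____"
]
],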
[
[
"<a id=\"wf\"></a>\n### Complete workflow:\n\nExample of a short **CWL workflow** with **BioExcel building blocks**, which retrieves a **PDB file** for the **Lysozyme protein structure** from the RCSB PDB database (**step1: pdb.cwl**), and fixes the possible problems in the structure, adding **missing side chain atoms** if needed (**step2: fix_side_chain.cwl**). ",
"_____no_output_____"
]
],
[
[
"# !/usr/bin/env cwl-runner\n\ncwlVersion: v1.0\nclass: Workflow\nlabel: Example of a short CWL workflow with BioExcel building blocks\ndoc: |\n Example of a short 'CWL workflow' with 'BioExcel building blocks', which\n retrieves a 'PDB file' for the 'Lysozyme protein structure' from the RCSB PDB\n database ('step1: pdb.cwl'), and fixes the possible problems in the structure,\n adding 'missing side chain atoms' if needed ('step2: fix_side_chain.cwl').\n\ninputs:\n step1_properties: '{\"pdb_code\" : \"1aki\"}'\n step1_output_name: 'tutorial_1aki.pdb'\n\noutputs:\n pdb:\n type: File\n outputSource: step2_fixsidechain/output_pdb_file\n\nsteps:\n step1_pdb:\n label: Fetch PDB Structure\n doc: |\n Download a protein structure from the PDB database\n run: biobb_adapters/pdb.cwl\n in:\n output_pdb_path: step1_output_name\n config: step1_properties\n out: [output_pdb_file]\n \n step2_fixsidechain:\n label: Fix Protein structure\n doc: |\n Fix the side chains, adding any side chain atoms missing in the\n original structure.\n run: biobb_adapters/fix_side_chain.cwl\n in:\n input_pdb_path: step1_pdb/output_pdb_file\n out: [output_pdb_file]",
"_____no_output_____"
]
],
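[
[
"Note that the *run:* fields above use **relative paths** (*biobb_adapters/pdb.cwl* and *biobb_adapters/fix_side_chain.cwl*), so the corresponding **CWL tool descriptions** must be placed accordingly. A possible layout of the working directory, consistent with those paths (an assumption for illustration), is shown in the next cell.",
"_____no_output_____"
]
],
[
[
"# Possible working directory layout for this example (matching the relative 'run:' paths):\n# .\n# |-- BioExcel-CWL-firstWorkflow.cwl (workflow description)\n# |-- BioExcel-CWL-firstWorkflow-job.yml (inputs of the run)\n# `-- biobb_adapters/\n# |-- pdb.cwl (step 1 tool description)\n# `-- fix_side_chain.cwl (step 2 tool description)",
"_____no_output_____"
]
],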
[
[
"<a id=\"runwf\"></a>\n### Running the CWL workflow:\n\nThe final step of the process is **running the workflow described in CWL**. For that, the description presented in the previous cell should be written to a file (e.g. BioExcel-CWL-firstWorkflow.cwl), the **YAML** input should be written to a separate file (e.g. BioExcel-CWL-firstWorkflow-job.yml) and finally both files should be used with the **CWL tool description reference implementation executer** (cwltool). \n\nIt is important to note that in order to properly run the **CWL workflow**, the **CWL descriptions** for all the **building blocks** used in the **workflow** should be accessible from the file system. In this example, all the **CWL descriptions** needed where downloaded from the [BioExcel building blocks adapters github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) to a folder named **biobb_adapters**. \n\nThe **command line** is shown in the cell below:",
"_____no_output_____"
]
],
[
[
"# Run CWL workflow with CWL tool description reference implementation (cwltool).\ncwltool BioExcel-CWL-firstWorkflow.cwl BioExcel-CWL-firstWorkflow-job.yml",
"_____no_output_____"
]
],
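[
[
"Before launching the run, the **workflow description** can be checked for syntax errors with the *--validate* flag of **cwltool**, and the *--outdir* flag can be used to collect the final outputs in a specific folder (optional usage sketch):",
"_____no_output_____"
]
],
[
[
"# Optional: validate the CWL description without running the workflow\ncwltool --validate BioExcel-CWL-firstWorkflow.cwl\n\n# Optional: write the final workflow outputs to a specific directory\ncwltool --outdir ./results BioExcel-CWL-firstWorkflow.cwl BioExcel-CWL-firstWorkflow-job.yml",
"_____no_output_____"
]
],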
[
[
"<a id=\"wfoutput\"></a>\n### Cwltool workflow output\n\nThe **execution of the workflow** will write information to the standard output such as the **step being performed**, the **way it is run** (command line, docker container, etc.), **inputs and outputs** used, and **state of each step** (success, failed). The next cell contains a **real output** for the **execution of our first example**:",
"_____no_output_____"
]
],
[
[
"Resolved 'BioExcel-CWL-firstWorkflow.cwl' to 'file:///PATH/biobb_wf_md_setup/cwl/BioExcel-CWL-firstWorkflow.cwl'\n[workflow BioExcel-CWL-firstWorkflow.cwl] start\n[step step1_pdb] start\n[job step1_pdb] /private/tmp/docker_tmp1g8y0wu0$ docker \\\n run \\\n -i \\\n --volume=/private/tmp/docker_tmp1g8y0wu0:/private/var/spool/cwl:rw \\\n --volume=/private/var/folders/7f/0hxgf3d971b98lk_fps26jx40000gn/T/tmps4_pw5tj:/tmp:rw \\\n --workdir=/private/var/spool/cwl \\\n --read-only=true \\\n --user=501:20 \\\n --rm \\\n --env=TMPDIR=/tmp \\\n --env=HOME=/private/var/spool/cwl \\\n quay.io/biocontainers/biobb_io:0.1.3--py_0 \\\n pdb \\\n --config \\\n '{\"pdb_code\" : \"1aki\"}' \\\n --output_pdb_path \\\n tutorial.pdb\n2019-10-24 08:42:06,235 [MainThread ] [INFO ] Downloading: 1aki from: https://files.rcsb.org/download/1aki.pdb\n2019-10-24 08:42:07,594 [MainThread ] [INFO ] Writting pdb to: /private/var/spool/cwl/tutorial.pdb\n2019-10-24 08:42:07,607 [MainThread ] [INFO ] Filtering lines NOT starting with one of these words: ['ATOM', 'MODEL', 'ENDMDL']\n[job step1_pdb] completed success\n[step step1_pdb] completed success\n[step step2_fixsidechain] start\n[job step2_fixsidechain] /private/tmp/docker_tmpuaecttdd$ docker \\\n run \\\n -i \\\n --volume=/private/tmp/docker_tmpuaecttdd:/private/var/spool/cwl:rw \\\n --volume=/private/var/folders/7f/0hxgf3d971b98lk_fps26jx40000gn/T/tmp9t_nks8r:/tmp:rw \\\n --volume=/private/tmp/docker_tmp1g8y0wu0/tutorial.pdb:/private/var/lib/cwl/stg5b2950e7-ef54-4df6-be70-677050c4c258/tutorial.pdb:ro \\\n --workdir=/private/var/spool/cwl \\\n --read-only=true \\\n --user=501:20 \\\n --rm \\\n --env=TMPDIR=/tmp \\\n --env=HOME=/private/var/spool/cwl \\\n quay.io/biocontainers/biobb_model:0.1.3--py_0 \\\n fix_side_chain \\\n --input_pdb_path \\\n /private/var/lib/cwl/stg5b2950e7-ef54-4df6-be70-677050c4c258/tutorial.pdb \\\n --output_pdb_path \\\n fixed.pdb\n[job step2_fixsidechain] completed success\n[step step2_fixsidechain] completed success\n[workflow BioExcel-CWL-firstWorkflow.cwl] completed success\n{\n \"pdb\": {\n \"location\": \"file:///PATH/biobb_wf_md_setup/cwl/fixed.pdb\",\n \"basename\": \"fixed.pdb\",\n \"class\": \"File\",\n \"checksum\": \"sha1$3ef7a955f93f25af5e59b85bcf4cb1d0bbf69a40\",\n \"size\": 81167,\n \"format\": \"http://edamontology.org/format_1476\",\n \"path\": \"/PATH/biobb_wf_md_setup/cwl/fixed.pdb\"\n }\n}\nFinal process status is success",
"_____no_output_____"
]
],
[
[
"***\n<a id=\"mdsetup\"></a>\n## Protein MD-Setup CWL workflow with BioExcel building blocks \n\nThe last step of this **tutorial** illustrates the building of a **complex CWL workflow**. The example used is the **Protein Gromacs MD Setup** [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). It is strongly recommended to take a look at this **notebook** before moving on to the next sections of this **tutorial**, as it contains information for all the **building blocks** used. The aim of this **tutorial** is to illustrate how to build **CWL workflows** using the **BioExcel building blocks**. For information about the science behind every step of the workflow, please refer to the **Protein Gromacs MD Setup** Jupyter Notebook tutorial. The **workflow** presented in the next cells is a translation of the very same workflow to **CWL language**, including the same **number of steps** (23) and **building blocks**. \n<a id=\"mdsteps\"></a>\n### Steps:\n\nFirst of all, let's define the **steps of the workflow**. \n\n* **Fetching PDB Structure**: step 1\n* **Fix Protein Structure**: step 2\n* **Create Protein System Topology**: step 3\n* **Create Solvent Box**: step 4\n* **Fill the Box with Water Molecules**: step 5 \n* **Adding Ions**: steps 6 and 7\n* **Energetically Minimize the System**: steps 8, 9 and 10\n* **Equilibrate the System (NVT)**: steps 11, 12 and 13\n* **Equilibrate the System (NPT)**: steps 14, 15 and 16\n* **Free Molecular Dynamics Simulation**: steps 17 and 18\n* **Post-processing Resulting 3D Trajectory**: steps 19 to 23\n\nMandatory and optional **inputs** and **outputs** of every **building block** can be consulted in the appropriate **documentation** pages from the corresponding **BioExcel building block** category (see updated table [here](http://mmb.irbbarcelona.org/webdev/slim/biobb/public/availability/source)). ",
"_____no_output_____"
]
],
[
[
" step1_pdb:\n label: Fetch PDB Structure\n doc: |\n Download a protein structure from the PDB database\n run: biobb/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl\n in:\n output_pdb_path: step1_pdb_name\n config: step1_pdb_config\n out: [output_pdb_file]\n\n step2_fixsidechain:\n label: Fix Protein structure\n doc: |\n Fix the side chains, adding any side chain atoms missing in the\n original structure.\n run: biobb/biobb_adapters/cwl/biobb_model/model/fix_side_chain.cwl\n in:\n input_pdb_path: step1_pdb/output_pdb_file\n out: [output_pdb_file]\n\n step3_pdb2gmx:\n label: Create Protein System Topology\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/pdb2gmx.cwl\n in:\n input_pdb_path: step2_fixsidechain/output_pdb_file\n out: [output_gro_file, output_top_zip_file]\n\n step4_editconf:\n label: Create Solvent Box\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/editconf.cwl\n in:\n input_gro_path: step3_pdb2gmx/output_gro_file\n out: [output_gro_file]\n\n step5_solvate:\n label: Fill the Box with Water Molecules\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/solvate.cwl\n in:\n input_solute_gro_path: step4_editconf/output_gro_file\n input_top_zip_path: step3_pdb2gmx/output_top_zip_file\n out: [output_gro_file, output_top_zip_file]\n\n step6_grompp_genion:\n label: Add Ions - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step6_gppion_config\n input_gro_path: step5_solvate/output_gro_file\n input_top_zip_path: step5_solvate/output_top_zip_file\n out: [output_tpr_file]\n\n step7_genion:\n label: Add Ions - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/genion.cwl\n in:\n config: step7_genion_config\n input_tpr_path: step6_grompp_genion/output_tpr_file\n input_top_zip_path: step5_solvate/output_top_zip_file\n out: [output_gro_file, output_top_zip_file]\n\n step8_grompp_min:\n label: Energetically Minimize the System - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step8_gppmin_config\n input_gro_path: step7_genion/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n out: [output_tpr_file]\n\n step9_mdrun_min:\n label: Energetically Minimize the System - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step8_grompp_min/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file]\n\n step10_energy_min:\n label: Energetically Minimize the System - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl\n in:\n config: step10_energy_min_config\n output_xvg_path: step10_energy_min_name\n input_energy_path: step9_mdrun_min/output_edr_file\n out: [output_xvg_file]\n\n step11_grompp_nvt:\n label: Equilibrate the System (NVT) - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step11_gppnvt_config\n input_gro_path: step9_mdrun_min/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n out: [output_tpr_file]\n\n step12_mdrun_nvt:\n label: Equilibrate the System (NVT) - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step11_grompp_nvt/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]\n\n step13_energy_nvt:\n label: Equilibrate the System (NVT) - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl\n in:\n config: step13_energy_nvt_config\n output_xvg_path: step13_energy_nvt_name\n input_energy_path: step12_mdrun_nvt/output_edr_file\n out: 
[output_xvg_file]\n\n step14_grompp_npt:\n label: Equilibrate the System (NPT) - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step14_gppnpt_config\n input_gro_path: step12_mdrun_nvt/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n input_cpt_path: step12_mdrun_nvt/output_cpt_file\n out: [output_tpr_file]\n\n step15_mdrun_npt:\n label: Equilibrate the System (NPT) - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step14_grompp_npt/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]\n\n step16_energy_npt:\n label: Equilibrate the System (NPT) - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl\n in:\n config: step16_energy_npt_config\n output_xvg_path: step16_energy_npt_name\n input_energy_path: step15_mdrun_npt/output_edr_file\n out: [output_xvg_file]\n\n step17_grompp_md:\n label: Free Molecular Dynamics Simulation - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step17_gppmd_config\n input_gro_path: step15_mdrun_npt/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n input_cpt_path: step15_mdrun_npt/output_cpt_file\n out: [output_tpr_file]\n\n step18_mdrun_md:\n label: Free Molecular Dynamics Simulation - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step17_grompp_md/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]\n\n step19_rmsfirst:\n label: Post-processing Resulting 3D Trajectory - part 1\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl\n in:\n config: step19_rmsfirst_config\n output_xvg_path: step19_rmsfirst_name\n input_structure_path: step17_grompp_md/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_xvg_file]\n\n step20_rmsexp:\n label: Post-processing Resulting 3D Trajectory - part 2\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl\n in:\n config: step20_rmsexp_config\n output_xvg_path: step20_rmsexp_name\n input_structure_path: step8_grompp_min/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_xvg_file]\n\n step21_rgyr:\n label: Post-processing Resulting 3D Trajectory - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rgyr.cwl\n in:\n config: step21_rgyr_config\n input_structure_path: step8_grompp_min/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_xvg_file]\n\n step22_image:\n label: Post-processing Resulting 3D Trajectory - part 4\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_image.cwl\n in:\n config: step22_image_config\n input_top_path: step17_grompp_md/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_traj_file]\n\n step23_dry:\n label: Post-processing Resulting 3D Trajectory - part 5\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_trjconv_str.cwl\n in:\n config: step23_dry_config\n input_structure_path: step18_mdrun_md/output_gro_file\n input_top_path: step17_grompp_md/output_tpr_file\n out: [output_str_file]",
"_____no_output_____"
]
],
[
[
"<a id=\"mdinputs\"></a>\n### Inputs:\n\nAll inputs for the **BioExcel building blocks** are defined as *strings*. Not all the steps in this particular example need **external inputs**, some of them just works using as input/s an output (or outputs) from **previous steps** (e.g. step2_fixsidechain). For the steps that need input, all of them will receive a **JSON** formatted input (of type string), with the **properties parameters** of the **building blocks** (config). Apart from that, some of the **building blocks** in this example are receiving two different input parameters: the **properties** (e.g. *step1_pdb_config*) and the **name of the output file** to be written (e.g. *step1_pdb_name*). This is particularly useful to identify the files generated by different steps of the **workflow**. Besides, in cases where the same **building block** is used more than once, using the **default value** for the **output files** will cause the **overwritting** of the results generated by previous steps (e.g. energy calculation steps). \n\nAll these inputs will be filled up with values from the **separated YAML input file**. ",
"_____no_output_____"
]
],
[
[
"inputs:\n step1_pdb_name: string\n step1_pdb_config: string\n step4_editconf_config: string\n step6_gppion_config: string\n step7_genion_config: string\n step8_gppmin_config: string\n step10_energy_min_config: string\n step10_energy_min_name: string\n step11_gppnvt_config: string\n step13_energy_nvt_config: string\n step13_energy_nvt_name: string\n step14_gppnpt_config: string\n step16_energy_npt_config: string\n step16_energy_npt_name: string\n step17_gppmd_config: string\n step19_rmsfirst_config: string\n step19_rmsfirst_name: string\n step20_rmsexp_config: string\n step20_rmsexp_name: string\n step21_rgyr_config: string\n step22_image_config: string\n step23_dry_config: string",
"_____no_output_____"
]
],
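[
[
"As with the first example, the values for all these inputs are provided in the **separate YAML input file** of the run. A minimal fragment for the first step (a sketch; the complete input file is covered in the 'Input of a run' section of this workflow) could look like:",
"_____no_output_____"
]
],
[
[
"# Sketch of the first entries of the YAML input file for the MD Setup run\nstep1_pdb_name: 'tutorial.pdb'\nstep1_pdb_config: '{\"pdb_code\" : \"1aki\"}'",
"_____no_output_____"
]
],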
[
[
"<a id=\"mdoutputs\"></a>\n### Outputs:\n\nThe **outputs section** contains the set of **final outputs** from the **workflow**. In this case, **outputs** from **different steps** of the **workflow** are considered **final outputs**:\n\n* **Trajectories**: \n * **trr**: Raw trajectory from the *free* simulation step.\n * **trr_imaged_dry**: Post-processed trajectory, dehydrated, imaged (rotations and translations removed) and centered.\n* **Structures**: \n * **gro**: Raw structure from the *free* simulation step.\n * **gro_dry**: Resulting protein structure taken from the post-processed trajectory, to be used as a topology, usually for visualization purposes. \n* **Topologies**: \n * **tpr**: GROMACS portable binary run input file, containing the starting structure of the simulation, the molecular topology and all the simulation parameters.\n * **top**: GROMACS topology file, containing the molecular topology in an ASCII readable format.\n* **System Setup Observables**:\n * **xvg_min**: Potential energy of the system during the minimization step. \n * **xvg_nvt**: Temperature of the system during the NVT equilibration step. \n * **xvg_npt**: Pressure and density of the system (box) during the NPT equilibration step.\n* **Simulation Analysis**:\n * **xvg_rmsfirst**: Root Mean Square deviation (RMSd) throughout the whole *free* simulation step against the first snapshot of the trajectory (equilibrated system).\n * **xvg_rmsexp**: Root Mean Square deviation (RMSd) throughout the whole *free* simulation step against the experimental structure (minimized system).\n * **xvg_rgyr**: Radius of Gyration (RGyr) of the molecule throughout the whole *free* simulation step.\n* **Checkpoint file**: \n * **cpt**: GROMACS portable checkpoint file, allowing to restore (continue) the simulation from the last step of the setup process. \n \nPlease note that the name of the **output files** is sometimes fixed by a **specific input** (e.g. step10_energy_min_name), whereas when no specific name is given as input, the **default value** is used (e.g. system.tpr). **Default values** can be found in the **CWL description** files for each **building block** (biobb_adapters). ",
"_____no_output_____"
]
],
[
[
"outputs:\n trr:\n label: Trajectories - Raw trajectory\n doc: |\n Raw trajectory from the free simulation step\n type: File\n outputSource: step18_mdrun_md/output_trr_file\n \n trr_imaged_dry:\n label: Trajectories - Post-processed trajectory\n doc: |\n Post-processed trajectory, dehydrated, imaged (rotations and translations\n removed) and centered.\n type: File\n outputSource: step22_image/output_traj_file\n \n gro_dry:\n label: Resulting protein structure\n doc: |\n Resulting protein structure taken from the post-processed trajectory, to\n be used as a topology, usually for visualization purposes.\n type: File\n outputSource: step23_dry/output_str_file\n \n gro:\n label: Structures - Raw structure\n doc: |\n Raw structure from the free simulation step.\n type: File\n outputSource: step18_mdrun_md/output_gro_file\n\n cpt:\n label: Checkpoint file\n doc: |\n GROMACS portable checkpoint file, allowing to restore (continue) the\n simulation from the last step of the setup process.\n type: File\n outputSource: step18_mdrun_md/output_cpt_file\n\n tpr:\n label: Topologies GROMACS portable binary run\n doc: |\n GROMACS portable binary run input file, containing the starting structure\n of the simulation, the molecular topology and all the simulation parameters.\n type: File\n outputSource: step17_grompp_md/output_tpr_file\n\n top:\n label: GROMACS topology file\n doc: |\n GROMACS topology file, containing the molecular topology in an ASCII\n readable format.\n type: File\n outputSource: step7_genion/output_top_zip_file\n \n xvg_min:\n label: System Setup Observables - Potential Energy\n doc: |\n Potential energy of the system during the minimization step.\n type: File\n outputSource: step10_energy_min/output_xvg_file\n\n xvg_nvt:\n label: System Setup Observables - Temperature\n doc: |\n Temperature of the system during the NVT equilibration step.\n type: File\n outputSource: step13_energy_nvt/output_xvg_file\n \n xvg_npt:\n label: System Setup Observables - Pressure and density \n type: File\n outputSource: step16_energy_npt/output_xvg_file\n \n xvg_rmsfirst:\n label: Simulation Analysis\n doc: |\n Root Mean Square deviation (RMSd) throughout the whole free simulation\n step against the first snapshot of the trajectory (equilibrated system).\n type: File\n outputSource: step19_rmsfirst/output_xvg_file\n xvg_rmsexp:\n label: Simulation Analysis\n doc: |\n Root Mean Square deviation (RMSd) throughout the whole free simulation\n step against the experimental structure (minimized system).\n type: File\n outputSource: step20_rmsexp/output_xvg_file\n \n xvg_rgyr:\n label: Simulation Analysis\n doc: |\n Radius of Gyration (RGyr) of the molecule throughout the whole free simulation step\n type: File\n outputSource: step21_rgyr/output_xvg_file",
"_____no_output_____"
]
],
[
[
"<a id=\"mdworkflow\"></a>\n### Complete workflow:\n\nThe complete **CWL described workflow** to run a **Molecular Dynamics Setup** on a protein structure can be found in the next cell. The **representation of the workflow** using the **CWL Viewer** web service can be found here: XXXXXX. The **full workflow** is a combination of the **inputs**, **outputs** and **steps** revised in the previous cells. ",
"_____no_output_____"
]
],
[
[
"# Protein MD-Setup CWL workflow with BioExcel building blocks\n# https://github.com/bioexcel/biobb_wf_md_setup\n\n#!/usr/bin/env cwl-runner\n\ncwlVersion: v1.0\nclass: Workflow\ninputs:\n step1_pdb_name: string\n step1_pdb_config: string\n step4_editconf_config: string\n step6_gppion_config: string\n step7_genion_config: string\n step8_gppmin_config: string\n step10_energy_min_config: string\n step10_energy_min_name: string\n step11_gppnvt_config: string\n step13_energy_nvt_config: string\n step13_energy_nvt_name: string\n step14_gppnpt_config: string\n step16_energy_npt_config: string\n step16_energy_npt_name: string\n step17_gppmd_config: string\n step19_rmsfirst_config: string\n step19_rmsfirst_name: string\n step20_rmsexp_config: string\n step20_rmsexp_name: string\n step21_rgyr_config: string\n step22_image_config: string\n step23_dry_config: string\n\noutputs:\n trr:\n label: Trajectories - Raw trajectory\n doc: |\n Raw trajectory from the free simulation step\n type: File\n outputSource: step18_mdrun_md/output_trr_file\n \n trr_imaged_dry:\n label: Trajectories - Post-processed trajectory\n doc: |\n Post-processed trajectory, dehydrated, imaged (rotations and translations\n removed) and centered.\n type: File\n outputSource: step22_image/output_traj_file\n \n gro_dry:\n label: Resulting protein structure\n doc: |\n Resulting protein structure taken from the post-processed trajectory, to\n be used as a topology, usually for visualization purposes.\n type: File\n outputSource: step23_dry/output_str_file\n \n gro:\n label: Structures - Raw structure\n doc: |\n Raw structure from the free simulation step.\n type: File\n outputSource: step18_mdrun_md/output_gro_file\n\n cpt:\n label: Checkpoint file\n doc: |\n GROMACS portable checkpoint file, allowing to restore (continue) the\n simulation from the last step of the setup process.\n type: File\n outputSource: step18_mdrun_md/output_cpt_file\n\n tpr:\n label: Topologies GROMACS portable binary run\n doc: |\n GROMACS portable binary run input file, containing the starting structure\n of the simulation, the molecular topology and all the simulation parameters.\n type: File\n outputSource: step17_grompp_md/output_tpr_file\n\n top:\n label: GROMACS topology file\n doc: |\n GROMACS topology file, containing the molecular topology in an ASCII\n readable format.\n type: File\n outputSource: step7_genion/output_top_zip_file\n \n xvg_min:\n label: System Setup Observables - Potential Energy\n doc: |\n Potential energy of the system during the minimization step.\n type: File\n outputSource: step10_energy_min/output_xvg_file\n\n xvg_nvt:\n label: System Setup Observables - Temperature\n doc: |\n Temperature of the system during the NVT equilibration step.\n type: File\n outputSource: step13_energy_nvt/output_xvg_file\n \n xvg_npt:\n label: System Setup Observables - Pressure and density \n type: File\n outputSource: step16_energy_npt/output_xvg_file\n \n xvg_rmsfirst:\n label: Simulation Analysis\n doc: |\n Root Mean Square deviation (RMSd) throughout the whole free simulation\n step against the first snapshot of the trajectory (equilibrated system).\n type: File\n outputSource: step19_rmsfirst/output_xvg_file\n xvg_rmsexp:\n label: Simulation Analysis\n doc: |\n Root Mean Square deviation (RMSd) throughout the whole free simulation\n step against the experimental structure (minimized system).\n type: File\n outputSource: step20_rmsexp/output_xvg_file\n \n xvg_rgyr:\n label: Simulation Analysis\n doc: |\n Radius of Gyration (RGyr) of the molecule 
throughout the whole free simulation step\n type: File\n outputSource: step21_rgyr/output_xvg_file\n\nsteps:\n step1_pdb:\n label: Fetch PDB Structure\n doc: |\n Download a protein structure from the PDB database\n run: biobb/biobb_adapters/cwl/biobb_io/mmb_api/pdb.cwl\n in:\n output_pdb_path: step1_pdb_name\n config: step1_pdb_config\n out: [output_pdb_file]\n\n step2_fixsidechain:\n label: Fix Protein structure\n doc: |\n Fix the side chains, adding any side chain atoms missing in the\n original structure.\n run: biobb/biobb_adapters/cwl/biobb_model/model/fix_side_chain.cwl\n in:\n input_pdb_path: step1_pdb/output_pdb_file\n out: [output_pdb_file]\n\n step3_pdb2gmx:\n label: Create Protein System Topology\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/pdb2gmx.cwl\n in:\n input_pdb_path: step2_fixsidechain/output_pdb_file\n out: [output_gro_file, output_top_zip_file]\n\n step4_editconf:\n label: Create Solvent Box\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/editconf.cwl\n in:\n input_gro_path: step3_pdb2gmx/output_gro_file\n out: [output_gro_file]\n\n step5_solvate:\n label: Fill the Box with Water Molecules\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/solvate.cwl\n in:\n input_solute_gro_path: step4_editconf/output_gro_file\n input_top_zip_path: step3_pdb2gmx/output_top_zip_file\n out: [output_gro_file, output_top_zip_file]\n\n step6_grompp_genion:\n label: Add Ions - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step6_gppion_config\n input_gro_path: step5_solvate/output_gro_file\n input_top_zip_path: step5_solvate/output_top_zip_file\n out: [output_tpr_file]\n\n step7_genion:\n label: Add Ions - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/genion.cwl\n in:\n config: step7_genion_config\n input_tpr_path: step6_grompp_genion/output_tpr_file\n input_top_zip_path: step5_solvate/output_top_zip_file\n out: [output_gro_file, output_top_zip_file]\n\n step8_grompp_min:\n label: Energetically Minimize the System - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step8_gppmin_config\n input_gro_path: step7_genion/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n out: [output_tpr_file]\n\n step9_mdrun_min:\n label: Energetically Minimize the System - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step8_grompp_min/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file]\n\n step10_energy_min:\n label: Energetically Minimize the System - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl\n in:\n config: step10_energy_min_config\n output_xvg_path: step10_energy_min_name\n input_energy_path: step9_mdrun_min/output_edr_file\n out: [output_xvg_file]\n\n step11_grompp_nvt:\n label: Equilibrate the System (NVT) - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step11_gppnvt_config\n input_gro_path: step9_mdrun_min/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n out: [output_tpr_file]\n\n step12_mdrun_nvt:\n label: Equilibrate the System (NVT) - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step11_grompp_nvt/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]\n\n step13_energy_nvt:\n label: Equilibrate the System (NVT) - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl\n in:\n config: 
step13_energy_nvt_config\n output_xvg_path: step13_energy_nvt_name\n input_energy_path: step12_mdrun_nvt/output_edr_file\n out: [output_xvg_file]\n\n step14_grompp_npt:\n label: Equilibrate the System (NPT) - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step14_gppnpt_config\n input_gro_path: step12_mdrun_nvt/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n input_cpt_path: step12_mdrun_nvt/output_cpt_file\n out: [output_tpr_file]\n\n step15_mdrun_npt:\n label: Equilibrate the System (NPT) - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step14_grompp_npt/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]\n\n step16_energy_npt:\n label: Equilibrate the System (NPT) - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_energy.cwl\n in:\n config: step16_energy_npt_config\n output_xvg_path: step16_energy_npt_name\n input_energy_path: step15_mdrun_npt/output_edr_file\n out: [output_xvg_file]\n\n step17_grompp_md:\n label: Free Molecular Dynamics Simulation - part 1\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/grompp.cwl\n in:\n config: step17_gppmd_config\n input_gro_path: step15_mdrun_npt/output_gro_file\n input_top_zip_path: step7_genion/output_top_zip_file\n input_cpt_path: step15_mdrun_npt/output_cpt_file\n out: [output_tpr_file]\n\n step18_mdrun_md:\n label: Free Molecular Dynamics Simulation - part 2\n run: biobb/biobb_adapters/cwl/biobb_md/gromacs/mdrun.cwl\n in:\n input_tpr_path: step17_grompp_md/output_tpr_file\n out: [output_trr_file, output_gro_file, output_edr_file, output_log_file, output_cpt_file]\n\n step19_rmsfirst:\n label: Post-processing Resulting 3D Trajectory - part 1\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl\n in:\n config: step19_rmsfirst_config\n output_xvg_path: step19_rmsfirst_name\n input_structure_path: step17_grompp_md/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_xvg_file]\n\n step20_rmsexp:\n label: Post-processing Resulting 3D Trajectory - part 2\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rms.cwl\n in:\n config: step20_rmsexp_config\n output_xvg_path: step20_rmsexp_name\n input_structure_path: step8_grompp_min/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_xvg_file]\n\n step21_rgyr:\n label: Post-processing Resulting 3D Trajectory - part 3\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_rgyr.cwl\n in:\n config: step21_rgyr_config\n input_structure_path: step8_grompp_min/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_xvg_file]\n\n step22_image:\n label: Post-processing Resulting 3D Trajectory - part 4\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_image.cwl\n in:\n config: step22_image_config\n input_top_path: step17_grompp_md/output_tpr_file\n input_traj_path: step18_mdrun_md/output_trr_file\n out: [output_traj_file]\n\n step23_dry:\n label: Post-processing Resulting 3D Trajectory - part 5\n run: biobb/biobb_adapters/cwl/biobb_analysis/gromacs/gmx_trjconv_str.cwl\n in:\n config: step23_dry_config\n input_structure_path: step18_mdrun_md/output_gro_file\n input_top_path: step17_grompp_md/output_tpr_file\n out: [output_str_file]",
"_____no_output_____"
]
],
[
[
"<a id=\"mdrun\"></a>\n### Input of the run:\n\nAs previously stated, all **CWL workflows** are divided in **two files**: the **CWL description** and the **YAML** or **JSON** files containing **all workflow inputs**. The following cell presents the **YAML** file describing the **inputs of the run** for the **Protein Gromacs MD Setup** workflow.\n\nAll the steps were defined as *strings* in the **CWL workflow**; **Building blocks** inputs ending by \"*_name*\" contain a simple *string* with the wanted file name; **Building blocks** inputs ending by \"*_config*\" contain the **properties parameters** in a *string* reproducing a **JSON format**. Please note here that all double quotes in **JSON format** must be escaped. The **properties parameters** were taken from the original **Protein Gromacs MD Setup** workflow [Jupyter Notebook tutorial](https://github.com/bioexcel/biobb_wf_md_setup). Please refer to it to find information about the values used. ",
"_____no_output_____"
]
],
[
[
"# Protein MD-Setup CWL workflow with BioExcel building blocks - Input YAML configuration file\n# https://github.com/bioexcel/biobb_wf_md_setup\n\nstep1_pdb_name: 'tutorial.pdb'\nstep1_pdb_config: '{\"pdb_code\" : \"1aki\"}'\nstep4_editconf_config: '{\"box_type\": \"cubic\",\"distance_to_molecule\": 1.0}'\nstep6_gppion_config: '{\"mdp\": {\"type\":\"minimization\"}}'\nstep7_genion_config: '{\"neutral\": \"True\"}'\nstep8_gppmin_config: '{\"mdp\": {\"type\":\"minimization\", \"nsteps\":\"5000\", \"emtol\":\"500\"}}'\nstep10_energy_min_config: '{\"terms\": [\"Potential\"]}'\nstep10_energy_min_name: 'energy_min.xvg'\nstep11_gppnvt_config: '{\"mdp\": {\"type\":\"nvt\", \"nsteps\":\"5000\", \"dt\":0.002, \"define\":\"-DPOSRES\"}}'\nstep13_energy_nvt_config: '{\"terms\": [\"Temperature\"]}'\nstep13_energy_nvt_name: 'energy_nvt.xvg'\nstep14_gppnpt_config: '{\"mdp\": {\"type\":\"npt\", \"nsteps\":\"5000\"}}'\nstep16_energy_npt_config: '{\"terms\": [\"Pressure\",\"Density\"]}'\nstep16_energy_npt_name: 'energy_npt.xvg'\nstep17_gppmd_config: '{\"mdp\": {\"type\":\"free\", \"nsteps\":\"50000\"}}'\nstep19_rmsfirst_config: '{\"selection\": \"Backbone\"}'\nstep19_rmsfirst_name: 'rmsd_first.xvg'\nstep20_rmsexp_config: '{\"selection\": \"Backbone\"}'\nstep20_rmsexp_name: 'rmsd_exp.xvg'\nstep21_rgyr_config: '{\"selection\": \"Backbone\"}'\nstep22_image_config: '{\"center_selection\":\"Protein\",\"output_selection\":\"Protein\",\"pbc\":\"mol\"}'\nstep23_dry_config: '{\"selection\": \"Protein\"}'",
"_____no_output_____"
]
],
[
[
"<a id=\"mdcwlrun\"></a>\n### Running the CWL workflow:\n\nThe final step of the process is **running the workflow described in CWL**. For that, the complete **workflow description** should be written to a file (e.g. BioExcel-CWL-MDSetup.cwl), the **YAML** input should be written to a separate file (e.g. BioExcel-CWL-MDSetup-job.yml) and finally both files should be used with the **CWL tool description reference implementation executer** (cwltool). \n\nAs in the previous example, it is important to note that in order to properly run the **CWL workflow**, the **CWL descriptions** for all the **building blocks** used in the **workflow** should be accessible from the file system. In this example, all the **CWL descriptions** needed where downloaded from the [BioExcel building blocks adapters github repository](https://github.com/bioexcel/biobb_adapters/tree/master/biobb_adapters/cwl) to a folder named **biobb_adapters**. \n\nIt is worth to note that as this workflow is using different **BioExcel building block modules** (biobb_io, biobb_model, biobb_md and biobb_analysis), so the **Docker container** for each of the modules will be downloaded the first time that it is launched. This process **could take some time** (and **disk space**). Once all the **Docker containers** are correctly downloaded and integrated in the system, the **workflow** should take around 1h (depending on the machine used).\n\nThe **command line** is shown in the cell below:",
"_____no_output_____"
]
],
[
[
"# Run CWL workflow with CWL tool description reference implementation (cwltool).\ncwltool BioExcel-CWL-MDSetup.cwl BioExcel-CWL-MDSetup-job.yml",
"_____no_output_____"
]
],
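[
 [
  "# Optional pre-flight check (a hedged suggestion, not part of the original run; assumes cwltool is installed).\n# --validate only parses and checks the CWL document, without executing any workflow step.\ncwltool --validate BioExcel-CWL-MDSetup.cwl",
  "_____no_output_____"
 ]
],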
[
[
"***\n<a id=\"questions\"></a>\n\n## Questions & Comments\n\nQuestions, issues, suggestions and comments are really welcome!\n\n* GitHub issues:\n * [https://github.com/bioexcel/biobb](https://github.com/bioexcel/biobb)\n\n* BioExcel forum:\n * [https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library](https://ask.bioexcel.eu/c/BioExcel-Building-Blocks-library)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0d83ad0bbd3135831db7c6a26a1ad0cb68df14c | 14,290 | ipynb | Jupyter Notebook | .ipynb_checkpoints/FluxSat_paper_figure1-checkpoint.ipynb | MAB2COAPS/flux | 4954be9eea78f03f4378b661267a721664f0254d | [
"Apache-2.0"
] | 6 | 2019-05-04T01:29:28.000Z | 2021-10-06T15:10:20.000Z | .ipynb_checkpoints/FluxSat_paper_figure1-checkpoint.ipynb | MAB2COAPS/flux | 4954be9eea78f03f4378b661267a721664f0254d | [
"Apache-2.0"
] | null | null | null | .ipynb_checkpoints/FluxSat_paper_figure1-checkpoint.ipynb | MAB2COAPS/flux | 4954be9eea78f03f4378b661267a721664f0254d | [
"Apache-2.0"
] | 5 | 2019-05-04T12:49:42.000Z | 2021-01-31T13:59:22.000Z | 44.104938 | 113 | 0.612456 | [
[
[
"import xarray as xr\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\n\nfrom scipy.io import loadmat\n\n\n#where to find the data\nadir= 'F:/data/fluxsat/WS_SST_Correlation/'\n\n#read in the data\nds1=xr.open_dataset(adir+'Corr_High_redone.nc')\nds1.close()\nds2=xr.open_dataset(adir+'Corr_Full.nc') #Full: corelation using unfiltered daily data: \nds2.close()\ntem = loadmat(adir+'fluxDifferences.mat')\nds_err = xr.Dataset({'err': (['lat', 'lon'], tem['combinedSD'].transpose())},\n coords={'lon': (['lon'], tem['longitude'][:,0]),\n 'lat': (['lat'], tem['latitude'][:,0])})",
"_____no_output_____"
],
[
"#scientific colormaps\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import LinearSegmentedColormap\ncm_data = np.loadtxt(\"C:/Users/gentemann/Google Drive/d_drive/ScientificColourMaps6/vik/vik.txt\")\nvik_map = LinearSegmentedColormap.from_list(\"vik\", cm_data)\ncm_data = np.loadtxt(\"C:/Users/gentemann/Google Drive/d_drive/ScientificColourMaps6/roma/roma.txt\")\nroma_map = LinearSegmentedColormap.from_list(\"roma\", cm_data)\nroma_map2 = LinearSegmentedColormap.from_list(\"roma\", cm_data[-1::-1])",
"_____no_output_____"
],
[
"tem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(12, 4))\nax = plt.axes(projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-1,vmax=1,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6)\ncax.set_label('Correlation Coefficient')\naxt = plt.axes((.3, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'a)',fontsize=16)\nfig.savefig(adir+'no_filter_wh.png')",
"_____no_output_____"
],
[
"tem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(12, 4))\nax = plt.axes(projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-1,vmax=1,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6)\ncax.set_label('Correlation Coefficient High Pass')\naxt = plt.axes((.3, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'b)',fontsize=16)\nfig.savefig(adir+'high_pass_wh.png')",
"_____no_output_____"
],
[
"tem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\ntem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(12, 4))\nax = plt.axes(projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6)\ncax.set_label('Standard deviation (W m$^{-2}$)')\naxt = plt.axes((.3, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'b)',fontsize=16)\nfig.savefig(adir+'err.png')",
"_____no_output_____"
],
[
"vv=.75\ntem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(15, 8))\nax = plt.subplot(211,projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient')\naxt = plt.axes((.4, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'a)',fontsize=16)\n\ntem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(212,projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient High Pass')\naxt = plt.axes((.4, .4, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'b)',fontsize=16)\nfig.savefig(adir+'both.png')\n",
"_____no_output_____"
],
[
"vv=.75\ntem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(15, 12))\nax = plt.subplot(311,projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient')\naxt = plt.axes((.4, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'a)',fontsize=16)\n\ntem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(312,projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient \\n High Pass')\naxt = plt.axes((.4, .53, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'b)',fontsize=16)\n\ntem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\ntem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(313,projection=ccrs.Mollweide(central_longitude=-160))\nax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Standard deviation (W m$^{-2}$)')\naxt = plt.axes((.4, .26, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'c)',fontsize=16)\n\nfig.savefig(adir+'ALL.png')",
"_____no_output_____"
],
[
"vv=.75\ntem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(15, 12))\nax = plt.subplot(311,projection=ccrs.Mollweide(central_longitude=-160))\n#ax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient')\naxt = plt.axes((.4, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'a)',fontsize=16)\n\ntem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(312,projection=ccrs.Mollweide(central_longitude=-160))\n#ax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient \\n High Pass')\naxt = plt.axes((.4, .53, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'b)',fontsize=16)\n\ntem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\ntem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(313,projection=ccrs.Mollweide(central_longitude=-160))\n#ax.stock_img()\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Standard deviation (W m$^{-2}$)')\naxt = plt.axes((.4, .26, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'c)',fontsize=16)\n\nfig.savefig(adir+'ALL_whiteland.png')\n",
"_____no_output_____"
],
[
"import cartopy.feature as cfeature\nvv=.75\ntem=xr.concat([ds2.sel(lon=slice(20,360)),ds2.sel(lon=slice(0,20))],dim='lon')\nfig = plt.figure(figsize=(15, 12))\nax = plt.subplot(311,projection=ccrs.Mollweide(central_longitude=-160))\n#ax.stock_img()\nax.add_feature(cfeature.LAND,facecolor='grey')\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient')\naxt = plt.axes((.4, .8, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'a)',fontsize=16)\n\ntem=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(312,projection=ccrs.Mollweide(central_longitude=-160))\n#ax.stock_img()\nax.add_feature(cfeature.LAND,facecolor='grey')\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem.lon,tem.lat,tem.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.corrH,vmin=-vv,vmax=vv,cmap=vik_map,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Correlation Coefficient \\n High Pass')\naxt = plt.axes((.4, .53, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'b)',fontsize=16)\n\ntem1=xr.concat([ds1.sel(lon=slice(20,360)),ds1.sel(lon=slice(0,20))],dim='lon')\ntem=xr.concat([ds_err.sel(lon=slice(20,360)),ds_err.sel(lon=slice(0,20))],dim='lon')\nax = plt.subplot(313,projection=ccrs.Mollweide(central_longitude=-160))\n#ax.stock_img()\nax.add_feature(cfeature.LAND,facecolor='grey')\nax.coastlines(resolution='50m', color='black', linewidth=1)\nax0=ax.pcolormesh(tem1.lon,tem1.lat,tem1.mask,vmin=0,vmax=10,cmap='binary',transform=ccrs.PlateCarree())\nax1=ax.pcolormesh(tem.lon,tem.lat,tem.err,vmin=0,vmax=30,cmap=roma_map2,transform=ccrs.PlateCarree())\ncax=plt.colorbar(ax1,ax=ax, shrink=.6, pad=0.01)\ncax.set_label('Standard deviation (W m$^{-2}$)')\naxt = plt.axes((.4, .26, .01, .01))\naxt.axis('off')\naxt.text(0,1.2,'c)',fontsize=16)\n\nfig.savefig(adir+'ALL_greyland.png')\n\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d84f85a7b997650c42e2baa5c0a987f08c4421 | 127,578 | ipynb | Jupyter Notebook | notebooks/copy-task-plots.ipynb | arthurlemon/NTM | 53ea2ba4945b98d88226166b7818dfdfdd661a6e | [
"BSD-3-Clause"
] | 1 | 2022-03-22T20:29:54.000Z | 2022-03-22T20:29:54.000Z | notebooks/copy-task-plots.ipynb | zhongyuchen/pytorch-ntm | d0954cdf8ac6ecb75ea70fb1e85c29b3c1e07499 | [
"BSD-3-Clause"
] | null | null | null | notebooks/copy-task-plots.ipynb | zhongyuchen/pytorch-ntm | d0954cdf8ac6ecb75ea70fb1e85c29b3c1e07499 | [
"BSD-3-Clause"
] | 1 | 2021-07-16T17:04:14.000Z | 2021-07-16T17:04:14.000Z | 277.947712 | 78,482 | 0.911364 | [
[
[
"# Copy Task Plots",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom glob import glob\nimport json\nimport os\nimport sys\nsys.path.append(os.path.abspath(os.getcwd() + \"./../\"))\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Load training history\n\nTo generate the models and training history used in this notebook, run the following commands:\n\n```\nmkdir ./notebooks/copy\n./train.py --seed 1 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy\n./train.py --seed 10 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy\n./train.py --seed 100 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy\n./train.py --seed 1000 --task copy --checkpoint-interval 500 --checkpoint-path ./notebooks/copy\n```",
"_____no_output_____"
]
],
[
[
"batch_num = 40000\nfiles = glob(\"./copy/*-{}.json\".format(batch_num))\nfiles",
"_____no_output_____"
],
[
"# Read the metrics from the .json files\nhistory = [json.loads(open(fname, \"rt\").read()) for fname in files]\ntraining = np.array([(x['cost'], x['loss'], x['seq_lengths']) for x in history])\nprint(\"Training history (seed x metric x sequence) =\", training.shape)",
"Training history (seed x metric x sequence) = (4, 3, 40000)\n"
],
[
"# Average every dv values across each (seed, metric)\ndv = 1000\ntraining = training.reshape(len(files), 3, -1, dv).mean(axis=3)\nprint(training.shape)",
"(4, 3, 40)\n"
],
[
"# Average the seeds\ntraining_mean = training.mean(axis=0)\ntraining_std = training.std(axis=0)\nprint(training_mean.shape)\nprint(training_std.shape)",
"(3, 40)\n(3, 40)\n"
],
[
"fig = plt.figure(figsize=(12, 5))\n\n# X axis is normalized to thousands\nx = np.arange(dv / 1000, (batch_num / 1000) + (dv / 1000), dv / 1000)\n\n# Plot the cost\n# plt.plot(x, training_mean[0], 'o-', linewidth=2, label='Cost')\nplt.errorbar(x, training_mean[0], yerr=training_std[0], fmt='o-', elinewidth=2, linewidth=2, label='Cost')\nplt.grid()\nplt.yticks(np.arange(0, training_mean[0][0]+5, 5))\nplt.ylabel('Cost per sequence (bits)')\nplt.xlabel('Sequence (thousands)')\nplt.title('Training Convergence', fontsize=16)\n\nax = plt.axes([.57, .55, .25, .25], facecolor=(0.97, 0.97, 0.97))\nplt.title(\"BCELoss\")\nplt.plot(x, training_mean[1], 'r-', label='BCE Loss')\nplt.yticks(np.arange(0, training_mean[1][0]+0.2, 0.2))\nplt.grid()\n\nplt.show()",
"_____no_output_____"
],
[
"loss = history[3]['loss']\ncost = history[3]['cost']\nseq_lengths = history[3]['seq_lengths']\n\nunique_sls = set(seq_lengths)\nall_metric = list(zip(range(1, batch_num+1), seq_lengths, loss, cost))\n\nfig = plt.figure(figsize=(12, 5))\nplt.ylabel('Cost per sequence (bits)')\nplt.xlabel('Iteration (thousands)')\nplt.title('Training Convergence (Per Sequence Length)', fontsize=16)\n\nfor sl in unique_sls:\n sl_metrics = [i for i in all_metric if i[1] == sl]\n\n x = [i[0] for i in sl_metrics]\n y = [i[3] for i in sl_metrics]\n \n num_pts = len(x) // 50\n total_pts = num_pts * 50\n \n x_mean = [i.mean()/1000 for i in np.split(np.array(x)[:total_pts], num_pts)]\n y_mean = [i.mean() for i in np.split(np.array(y)[:total_pts], num_pts)]\n \n plt.plot(x_mean, y_mean, label='Seq-{}'.format(sl))\n\nplt.yticks(np.arange(0, 80, 5))\nplt.legend(loc=0)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Evaluate",
"_____no_output_____"
]
],
[
[
"import torch\nfrom IPython.display import Image as IPythonImage\nfrom PIL import Image, ImageDraw, ImageFont\nimport io\nfrom tasks.copytask import dataloader\nfrom train import evaluate",
"_____no_output_____"
],
[
"from tasks.copytask import CopyTaskModelTraining\nmodel = CopyTaskModelTraining()",
"_____no_output_____"
],
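[
"# A hedged precaution (assumes model.net is a torch.nn.Module): switch the network to eval mode\n# so dropout/normalization layers behave deterministically while we generate predictions.\nmodel.net.eval()",
"_____no_output_____"
],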
[
"model.net.load_state_dict(torch.load(\"./copy/copy-task-10-batch-40000.model\"))",
"_____no_output_____"
],
[
"seq_len = 60\n_, x, y = next(iter(dataloader(1, 1, 8, seq_len, seq_len)))\nresult = evaluate(model.net, model.criterion, x, y)\ny_out = result['y_out']",
"_____no_output_____"
],
[
"def cmap(value):\n pixval = value * 255\n low = 64\n high = 240\n factor = (255 - low - (255-high)) / 255\n return int(low + pixval * factor)\n\ndef draw_sequence(y, u=12):\n seq_len = y.size(0)\n seq_width = y.size(2)\n inset = u // 8\n pad = u // 2\n width = seq_len * u + 2 * pad\n height = seq_width * u + 2 * pad\n im = Image.new('L', (width, height))\n draw = ImageDraw.ImageDraw(im)\n draw.rectangle([0, 0, width, height], fill=250)\n for i in range(seq_len):\n for j in range(seq_width):\n val = 1 - y[i, 0, j].data[0]\n draw.rectangle([pad + i*u + inset,\n pad + j*u + inset,\n pad + (i+1)*u - inset,\n pad + (j+1)*u - inset], fill=cmap(val))\n\n return im\n\ndef im_to_png_bytes(im):\n png = io.BytesIO()\n im.save(png, 'PNG')\n return bytes(png.getbuffer())\n\ndef im_vconcat(im1, im2, pad=8):\n assert im1.size == im2.size\n w, h = im1.size\n\n width = w\n height = h * 2 + pad\n\n im = Image.new('L', (width, height), color=255)\n im.paste(im1, (0, 0))\n im.paste(im2, (0, h+pad))\n return im",
"_____no_output_____"
],
[
"def make_eval_plot(y, y_out, u=12):\n im_y = draw_sequence(y, u)\n im_y_out = draw_sequence(y_out, u)\n im = im_vconcat(im_y, im_y_out, u//2)\n \n w, h = im.size\n pad_w = u * 7\n im2 = Image.new('L', (w+pad_w, h), color=255)\n im2.paste(im, (pad_w, 0))\n \n # Add text\n font = ImageFont.truetype(\"./fonts/PT_Sans-Web-Regular.ttf\", 13)\n draw = ImageDraw.ImageDraw(im2)\n draw.text((u,4*u), \"Targets\", font=font)\n draw.text((u,13*u), \"Outputs\", font=font)\n \n return im2\n\nim = make_eval_plot(y, y_out, u=8)\nIPythonImage(im_to_png_bytes(im))",
"_____no_output_____"
]
],
[
[
"## Create an animated GIF\n\nLets see how the prediction looks like in each checkpoint that we saved. ",
"_____no_output_____"
]
],
[
[
"seq_len = 80\n_, x, y = next(iter(dataloader(1, 1, 8, seq_len, seq_len)))\n\nframes = []\nfont = ImageFont.truetype(\"./fonts/PT_Sans-Web-Regular.ttf\", 13)\nfor batch_num in range(500, 10500, 500):\n model = CopyTaskModelTraining()\n model.net.load_state_dict(torch.load(\"./copy/copy-task-10-batch-{}.model\".format(batch_num)))\n result = evaluate(model.net, model.criterion, x, y)\n y_out = result['y_out']\n frame = make_eval_plot(y, y_out, u=10)\n \n w, h = frame.size\n frame_seq = Image.new('L', (w, h+40), color=255)\n frame_seq.paste(frame, (0, 40))\n \n draw = ImageDraw.ImageDraw(frame_seq)\n draw.text((10, 10), \"Sequence Num: {} (Cost: {})\".format(batch_num, result['cost']), font=font)\n \n frames += [frame_seq]",
"_____no_output_____"
],
[
"im = frames[0]\nim.save(\"./copy-train-80.gif\", save_all=True, append_images=frames[1:], loop=0, duration=1000)\n\nim = frames[0]\nim.save(\"./copy-train-80-fast.gif\", save_all=True, append_images=frames[1:], loop=0, duration=100)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0d852c7fe58d5eaad8c0ae5557b39ddcf9e1bd9 | 27,414 | ipynb | Jupyter Notebook | module06-cleaning.data-day3.ipynb | deepcloudlabs/dcl702-2021-jul-12 | 1b8442ddfb507f4bdc5fc3e2d1c9188c26f1834b | [
"MIT"
] | null | null | null | module06-cleaning.data-day3.ipynb | deepcloudlabs/dcl702-2021-jul-12 | 1b8442ddfb507f4bdc5fc3e2d1c9188c26f1834b | [
"MIT"
] | null | null | null | module06-cleaning.data-day3.ipynb | deepcloudlabs/dcl702-2021-jul-12 | 1b8442ddfb507f4bdc5fc3e2d1c9188c26f1834b | [
"MIT"
] | null | null | null | 27,414 | 27,414 | 0.619428 | [
[
[
"import numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"df1 = pd.DataFrame({\n 'food': ['bacon', 'pulled port', 'bacon', 'Pastrami', 'corned beef', 'Bacon', 'pastrami'],\n 'ounces': [4, 3, 12, 6, 7.5, 8, 3]\n})",
"_____no_output_____"
],
[
"df1",
"_____no_output_____"
],
[
"lowercased_food = df1['food'].str.lower()",
"_____no_output_____"
],
[
"meat_to_animal = {\n 'bacon': 'pig',\n 'pulled port': 'pig',\n 'pastrami': 'cow',\n 'corned beef': 'cow'\n}",
"_____no_output_____"
],
[
"df1['animal'] = lowercased_food.map(meat_to_animal)",
"_____no_output_____"
],
[
"df1",
"_____no_output_____"
],
[
"del df1['animal']",
"_____no_output_____"
],
[
"df1['animal'] = df1['food'].map(lambda fd : meat_to_animal[fd.lower()])",
"_____no_output_____"
],
[
"df1",
"_____no_output_____"
],
[
"ts1 = pd.Series([1., -999, 2., -999., -1000, 3.])",
"_____no_output_____"
],
[
"ts1",
"_____no_output_____"
],
[
"ts1.replace([-999.,-1000.0], [np.nan, 0.])",
"_____no_output_____"
],
[
"ts1.replace({\n -999.0: np.nan,\n -1000.0: 0\n})",
"_____no_output_____"
],
[
"ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 48, 41, 32]",
"_____no_output_____"
],
[
"bins = [18, 25, 35, 60, 100]",
"_____no_output_____"
],
[
"cats = pd.cut(ages, bins)",
"_____no_output_____"
],
[
"cats",
"_____no_output_____"
],
[
"cats.codes",
"_____no_output_____"
],
[
"cats.categories",
"_____no_output_____"
],
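[
"# A small aside (not in the original notebook): pd.cut also accepts human-readable bin labels,\n# and value_counts tallies how many ages fall into each bin.\ngroup_names = ['Youth', 'YoungAdult', 'MiddleAged', 'Senior']\npd.value_counts(pd.cut(ages, bins, labels=group_names))",
"_____no_output_____"
],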
[
"df2 = pd.DataFrame(np.random.randn(1000,4))",
"_____no_output_____"
],
[
"df2.describe()",
"_____no_output_____"
],
[
"col_zero = df2[0]",
"_____no_output_____"
],
[
"del col_zero[np.abs(col_zero) > 3]",
"_____no_output_____"
],
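[
"# Hedged alternative to deleting rows: cap values outside +/-3 at the boundary instead.\n# np.sign preserves the original direction of each capped outlier.\ndf2[np.abs(df2) > 3] = np.sign(df2) * 3\ndf2.describe()",
"_____no_output_____"
],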
[
"meyveler = [\" elma \", \"\\t\\tarmut\\t\\n\\n\\n\", \" kiraz\\n\\n\\n\", \"karpuz\", \"muz\", \"seftali\", \"kivi\", \"ananas\"]",
"_____no_output_____"
],
[
"print(meyveler)",
"[' elma ', '\\t\\tarmut\\t\\n\\n\\n', ' kiraz\\n\\n\\n', 'karpuz', 'muz', 'seftali', 'kivi', 'ananas']\n"
],
[
"fruits = [meyve.strip() for meyve in meyveler]",
"_____no_output_____"
],
[
"fruits",
"_____no_output_____"
],
[
"import re",
"_____no_output_____"
],
[
"pattern1 = r\"^k[a-z]*z$\" # ., *, +, ?, ^, $ , {}, [^ai]\npattern2 = r\"^[a-z]{6,}$\"\npattern3 = r\"^[^ai]+$\"\nreg1 = re.compile(pattern3, flags= re.IGNORECASE)",
"_____no_output_____"
],
[
"for fruit in fruits:\n if reg1.match(fruit):\n print(fruit)",
"muz\n"
],
[
"ts2 = pd.Series({\n 'jack': '[email protected]',\n 'kate': '[email protected]',\n 'james': '[email protected]',\n 'hugo' : np.nan\n})",
"_____no_output_____"
],
[
"ts2",
"_____no_output_____"
],
[
"ts2.isnull()",
"_____no_output_____"
],
[
"ts2.str.contains('gmail')",
"_____no_output_____"
],
[
"pattern=r'[a-z]{5}@.*\\.com'",
"_____no_output_____"
],
[
"ts2.str.findall(pattern, flags= re.IGNORECASE)",
"_____no_output_____"
],
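[
"# Hedged extension (not in the original notebook): str.extract returns regex groups as DataFrame\n# columns, here splitting each address into its user and domain parts (missing rows stay NaN).\nts2.str.extract(r'([a-z0-9.]+)@([a-z]+)\\\\.com', flags=re.IGNORECASE)",
"_____no_output_____"
],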
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d86d1c352e54b94f63750e24287bd381ecf773 | 124,529 | ipynb | Jupyter Notebook | notebooks/labs38_notebooks/FrankenBERT_Training.ipynb | n8mcdunna/labs31_gp2 | 8d56eba9089adaccbd0fa6d31b8c7f056d477e80 | [
"MIT"
] | 3 | 2021-05-14T16:01:58.000Z | 2021-08-09T20:20:54.000Z | notebooks/labs38_notebooks/FrankenBERT_Training.ipynb | BloomTech-Labs/human-rights-first-police-ds-a | 8d56eba9089adaccbd0fa6d31b8c7f056d477e80 | [
"MIT"
] | 52 | 2021-04-15T17:33:16.000Z | 2021-10-04T21:12:52.000Z | notebooks/labs38_notebooks/FrankenBERT_Training.ipynb | BloomTech-Labs/human-rights-first-police-ds-a | 8d56eba9089adaccbd0fa6d31b8c7f056d477e80 | [
"MIT"
] | 30 | 2021-03-10T20:30:26.000Z | 2021-10-15T15:08:29.000Z | 62.171243 | 24,978 | 0.566213 | [
[
[
"# Imports & Installations",
"_____no_output_____"
]
],
[
[
"!pip install pyforest\n!pip install plotnine \n!pip install transformers\n!pip install psycopg2-binary\n!pip uninstall -y tensorflow-datasets\n!pip install lit_nlp tfds-nightly transformers==4.1.1",
"Requirement already satisfied: pyforest in /usr/local/lib/python3.7/dist-packages (1.1.0)\nRequirement already satisfied: plotnine in /usr/local/lib/python3.7/dist-packages (0.6.0)\nRequirement already satisfied: mizani>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from plotnine) (0.6.0)\nRequirement already satisfied: descartes>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from plotnine) (1.1.0)\nRequirement already satisfied: numpy>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from plotnine) (1.19.5)\nRequirement already satisfied: patsy>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from plotnine) (0.5.1)\nRequirement already satisfied: statsmodels>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from plotnine) (0.10.2)\nRequirement already satisfied: pandas>=0.25.0 in /usr/local/lib/python3.7/dist-packages (from plotnine) (1.1.5)\nRequirement already satisfied: matplotlib>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from plotnine) (3.2.2)\nRequirement already satisfied: scipy>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from plotnine) (1.4.1)\nRequirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.1.1->plotnine) (2.4.7)\nRequirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.1.1->plotnine) (0.10.0)\nRequirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.1.1->plotnine) (2.8.2)\nRequirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=3.1.1->plotnine) (1.3.2)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from cycler>=0.10->matplotlib>=3.1.1->plotnine) (1.15.0)\nRequirement already satisfied: palettable in /usr/local/lib/python3.7/dist-packages (from mizani>=0.6.0->plotnine) (3.3.0)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas>=0.25.0->plotnine) (2018.9)\nRequirement already satisfied: transformers in /usr/local/lib/python3.7/dist-packages (4.1.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.7/dist-packages (from transformers) (0.0.45)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.62.2)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: tokenizers==0.9.4 in /usr/local/lib/python3.7/dist-packages (from transformers) (0.9.4)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.0.12)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers) (21.0)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.5.30)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from 
requests->transformers) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: psycopg2-binary in /usr/local/lib/python3.7/dist-packages (2.9.1)\n\u001b[33mWARNING: Skipping tensorflow-datasets as it is not installed.\u001b[0m\nRequirement already satisfied: lit_nlp in /usr/local/lib/python3.7/dist-packages (0.3)\nRequirement already satisfied: tfds-nightly in /usr/local/lib/python3.7/dist-packages (4.4.0.dev202109200107)\nRequirement already satisfied: transformers==4.1.1 in /usr/local/lib/python3.7/dist-packages (4.1.1)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (1.19.5)\nRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (4.62.2)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (21.0)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (2019.12.20)\nRequirement already satisfied: tokenizers==0.9.4 in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (0.9.4)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (3.0.12)\nRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (2.23.0)\nRequirement already satisfied: sacremoses in /usr/local/lib/python3.7/dist-packages (from transformers==4.1.1) (0.0.45)\nRequirement already satisfied: scikit-learn in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (0.22.2.post1)\nRequirement already satisfied: Werkzeug in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (1.0.1)\nRequirement already satisfied: ml-collections in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (0.1.0)\nRequirement already satisfied: pandas in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (1.1.5)\nRequirement already satisfied: sacrebleu in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (2.0.0)\nRequirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (1.4.1)\nRequirement already satisfied: umap-learn in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (0.5.1)\nRequirement already satisfied: absl-py in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (0.12.0)\nRequirement already satisfied: attrs in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (21.2.0)\nRequirement already satisfied: portpicker in /usr/local/lib/python3.7/dist-packages (from lit_nlp) (1.3.9)\nRequirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (0.16.0)\nRequirement already satisfied: dill in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (0.3.4)\nRequirement already satisfied: importlib-resources in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (5.2.2)\nRequirement already satisfied: tensorflow-metadata in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (1.2.0)\nRequirement 
already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (3.7.4.3)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (1.15.0)\nRequirement already satisfied: promise in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (2.3)\nRequirement already satisfied: protobuf>=3.12.2 in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (3.17.3)\nRequirement already satisfied: termcolor in /usr/local/lib/python3.7/dist-packages (from tfds-nightly) (1.1.0)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.1.1) (1.24.3)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.1.1) (3.0.4)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.1.1) (2.10)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers==4.1.1) (2021.5.30)\nRequirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from importlib-resources->tfds-nightly) (3.5.0)\nRequirement already satisfied: PyYAML in /usr/local/lib/python3.7/dist-packages (from ml-collections->lit_nlp) (5.4.1)\nRequirement already satisfied: contextlib2 in /usr/local/lib/python3.7/dist-packages (from ml-collections->lit_nlp) (0.5.5)\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers==4.1.1) (2.4.7)\nRequirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas->lit_nlp) (2.8.2)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.7/dist-packages (from pandas->lit_nlp) (2018.9)\nRequirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.7/dist-packages (from sacrebleu->lit_nlp) (0.8.9)\nRequirement already satisfied: colorama in /usr/local/lib/python3.7/dist-packages (from sacrebleu->lit_nlp) (0.4.4)\nRequirement already satisfied: portalocker in /usr/local/lib/python3.7/dist-packages (from sacrebleu->lit_nlp) (2.3.2)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers==4.1.1) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers==4.1.1) (1.0.1)\nRequirement already satisfied: googleapis-common-protos<2,>=1.52.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-metadata->tfds-nightly) (1.53.0)\nRequirement already satisfied: numba>=0.49 in /usr/local/lib/python3.7/dist-packages (from umap-learn->lit_nlp) (0.51.2)\nRequirement already satisfied: pynndescent>=0.5 in /usr/local/lib/python3.7/dist-packages (from umap-learn->lit_nlp) (0.5.4)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from numba>=0.49->umap-learn->lit_nlp) (57.4.0)\nRequirement already satisfied: llvmlite<0.35,>=0.34.0.dev0 in /usr/local/lib/python3.7/dist-packages (from numba>=0.49->umap-learn->lit_nlp) (0.34.0)\n"
],
[
"# Automatic library importer (doesn't quite import everything yet)\nfrom pyforest import *\n# Expands Dataframe to view entire pandas dataframe\npd.options.display.max_colwidth = 750\n# For tracking the duration of executed code cells\nfrom time import time\n# To connect to Blue Witness Labeler's DB\nimport psycopg2\n# For visualizations\nfrom plotnine import *\nfrom plotnine.data import mpg\nimport plotly.graph_objects as go\nfrom plotly.subplots import make_subplots\n# For BERT model\nimport torch\nfrom torch.utils.data import TensorDataset, DataLoader, RandomSampler\nfrom transformers import BertTokenizer, BertForSequenceClassification, AdamW\nfrom transformers import get_linear_schedule_with_warmup\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences",
"_____no_output_____"
]
],
[
[
"# Reading in our Tweets\n\n",
"_____no_output_____"
]
],
[
[
"def get_df(db_url) -> pd.DataFrame:\n '''\n Connects to our Blue Witness Data Labeler and retrieves manually labelled text before converting them all into a pandas dataframe.\n \n Parameters\n ----------\n db_url: psycopg2 database\n \n Returns\n -------\n df: pandas datafarme \n Contains thousands of text with appropriate police (non-)violence labels\n '''\n conn = psycopg2.connect(db_url)\n curs = conn.cursor()\n curs.execute(\"SELECT * FROM training;\")\n cols = [k[0] for k in curs.description]\n rows = curs.fetchall()\n df = pd.DataFrame(rows, columns=cols)\n curs.close()\n conn.close()\n return df",
"_____no_output_____"
],
[
"# ALWAYS REMEMBER TO REMOVE THE PostgreSQL URL ASSIGNED TO THIS VARIABLE WHEN COMITTING TO OUR REPO\ndb_url = \"\"",
"_____no_output_____"
],
[
"data_labeler_df = get_df(db_url)\ndata_labeler_df",
"_____no_output_____"
],
[
"def rank_wrangle():\n '''\n Loads in both synthetic tweets generated from GPT-2 and authentic tweets scraped and manually labelled from Twitter.\n Combines both sets of tweets together into a single dataframe.\n Drops any null values and duplicates.\n \n rank2_syn.txt, rank3_syn.txt, and rank4_syn.txt can be found in notebooks/labs37_notebooks/synthetic_tweets\n \n Parameters \n ----------\n None\n\n Returns\n -------\n df: pandas dataframe\n Contains fully concatenated dataframe\n '''\n # Supplying our dataframes with proper labels\n column_headers = ['tweets', 'labels']\n # Reading in our three police force rank datasets\n synthetic_tweets_cop_shot = pd.read_csv(\"/content/cop_shot_syn.txt\", sep = '/', names=column_headers)\n synthetic_tweets_run_over = pd.read_csv(\"/content/run_over_syn.txt\", sep = '/', names=column_headers)\n synthetic_tweets_rank2 = pd.read_csv(\"/content/rank2_syn.txt\", sep = '/', names=column_headers)\n synthetic_tweets_rank3 = pd.read_csv(\"/content/rank3_syn.txt\", sep = '/', names=column_headers)\n synthetic_tweets_rank4 = pd.read_csv(\"/content/rank4_syn.txt\", sep = '/', names=column_headers)\n # Concatenating all of our datasets into one\n compiled = pd.concat([data_labeler_df, synthetic_tweets_cop_shot, synthetic_tweets_run_over, synthetic_tweets_rank2, synthetic_tweets_rank3, synthetic_tweets_rank4])\n # Dropping unnecessary column\n compiled.drop('id', axis=1, inplace=True)\n # Discarding generated duplicates from GPT-2 while keeping the original Tweets\n compiled.drop_duplicates(subset='tweets', keep='first', inplace=True)\n # Dropping any possible NaNs\n if compiled.isnull().values.any():\n compiled.dropna(how='any', inplace=True)\n\n return compiled",
"_____no_output_____"
],
[
"# Applying our function above to view the contents of our dataframe\nforce_ranks = rank_wrangle()\nforce_ranks",
"_____no_output_____"
]
],
[
[
"# Visualizations",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\n(ggplot(force_ranks) # defining what dataframe to use\n + aes(x='labels') # defining what variable/column to use\n + geom_bar(size=20) # defining the type of plot to use and its size\n + labs(title='Number of Tweets Reporting Police Violence per Force Rank', x='Force Rank', y='Number of Tweets')\n)",
"_____no_output_____"
],
[
"# Creating custom donut chart with Plotly\nlabels = ['0 - No Police Presence', '5 - Lethal Force (Guns & Explosives)', '1 - Non-violent Police Presence', '3 - Blunt Force Trauma (Batons & Shields)', '4 - Chemical & Electric Weapons (Tasers & Pepper Spray)', '2 - Open Handed (Arm Holds & Pushing)']\nvalues = force_ranks.labels.value_counts()\n\nbw_colors = ['rgb(138, 138, 144)', 'rgb(34, 53, 101)', 'rgb(37, 212, 247)', 'rgb(59, 88, 181)', 'rgb(56, 75, 126)',\n 'rgb(99, 133, 242)']\n\n# Using 'pull' on Rank 5 to accentuate the frequency of the most excessive use of force by police\n# 'hole' determines the size of the donut chart\nfig = go.Figure(data=[go.Pie(labels=labels,\n values=values, pull=[0, 0.2, 0, 0, 0, 0],\n hole=.3,\n name='Blue Witness',\n marker_colors=bw_colors)])\n\n# Displaying our donut chart\nfig.update(layout_title_text='Percentage of Tweets Reporting Police Violence per Force Rank')\nfig = go.Figure(fig)\nfig.show()",
"_____no_output_____"
]
],
[
[
"# Preparing Data for BERT\n\nSplitting dataframe into training and testing sets before converting to parquet for later reference/resource.",
"_____no_output_____"
]
],
[
[
"def parquet_and_split():\n '''\n Splits our data into a format amicable to NLP modeling.\n Saves our original dataframe as well as the two split dataframes into parquet files for later reference/use.\n \n -----\n Parameters \n ------\n None\n Returns\n -------\n df: pandas dataframes\n Contains two split dataframes ready to be fit to and tested against a model\n '''\n # Splitting dataframe into training and testing sets for modeling\n # 20% of our data will be reserved for testing\n training, testing = train_test_split(force_ranks, test_size=0.2)\n # Sanity Check\n if force_ranks.shape[0] == training.shape[0] + testing.shape[0]:\n print(\"Sanity Check - Succesful!\")\n else:\n print(\"Sanity Check - Unsuccessful!\")\n # Converting dataframes to parquet format for later reference\n # Using parquet as our new dataset storage format as they cannot be edited like CSVs can. They are immutable.\n # For viewing in vscode, install the parquet-viewer extension: https://marketplace.visualstudio.com/items?itemName=dvirtz.parquet-viewer\n training.to_parquet('synthetic_training.parquet')\n testing.to_parquet('synthetic_testing.parquet')\n force_ranks.to_parquet('synthetic_complete.parquet')\n\n return training, testing",
"_____no_output_____"
],
[
"training, testing = parquet_and_split()",
"_____no_output_____"
]
],
[
[
"# BERT",
"_____no_output_____"
],
[
"## Training our NLP Multi-Class Classification Model",
"_____no_output_____"
]
],
[
[
"def bert_trainer(df, output_dir: str, epochs: int):\n start = time()\n max_len = 280\n if torch.cuda.is_available():\n print(\"CUDA Active\")\n device = torch.device(\"cuda\")\n else:\n print(\"CPU Active\")\n device = torch.device(\"cpu\")\n sentences = df[\"tweets\"].values\n labels = df[\"labels\"].values\n tokenizer = BertTokenizer.from_pretrained(\n 'bert-base-uncased',\n do_lower_case=True,\n )\n inputs = [\n tokenizer.encode(sent, add_special_tokens=True) for sent in sentences\n ]\n inputs_ids = pad_sequences(\n inputs,\n maxlen=max_len,\n dtype=\"long\",\n value=0,\n truncating=\"post\",\n padding=\"post\",\n )\n attention_masks = [\n [int(token_id != 0) for token_id in sent] for sent in inputs_ids\n ]\n train_inputs = torch.tensor(inputs_ids)\n train_labels = torch.tensor(labels)\n train_masks = torch.tensor(attention_masks)\n batch_size = 32\n train_data = TensorDataset(train_inputs, train_masks, train_labels)\n train_sampler = RandomSampler(train_data)\n train_dataloader = DataLoader(\n train_data,\n sampler=train_sampler,\n batch_size=batch_size,\n )\n model = BertForSequenceClassification.from_pretrained(\n 'bert-base-uncased',\n num_labels=6,\n output_attentions=False,\n output_hidden_states=False,\n )\n if torch.cuda.is_available():\n model.cuda()\n optimizer = AdamW(model.parameters(), lr=2e-5, eps=1e-8)\n total_steps = len(train_dataloader) * epochs\n scheduler = get_linear_schedule_with_warmup(\n optimizer,\n num_warmup_steps=0,\n num_training_steps=total_steps,\n )\n loss_values = []\n print('\\nTraining...')\n for epoch_i in range(1, epochs + 1):\n print(f\"\\nEpoch: {epoch_i}\")\n total_loss = 0\n model.train()\n for step, batch in enumerate(train_dataloader):\n b_input_ids = batch[0].to(device)\n b_input_mask = batch[1].to(device)\n b_labels = batch[2].to(device)\n model.zero_grad()\n outputs = model(\n b_input_ids,\n token_type_ids=None,\n attention_mask=b_input_mask,\n labels=b_labels,\n )\n loss = outputs[0]\n total_loss += loss.item()\n loss.backward()\n torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)\n optimizer.step()\n scheduler.step()\n\n avg_train_loss = total_loss / len(train_dataloader)\n loss_values.append(avg_train_loss)\n print(f\"Average Loss: {avg_train_loss}\")\n\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n print(f\"\\nSaving model to {output_dir}\")\n model_to_save = model.module if hasattr(model, 'module') else model\n model_to_save.save_pretrained(output_dir)\n tokenizer.save_pretrained(output_dir)\n end = time()\n total_run_time_in_hours = (((end - start)/60)/60)\n rounded_total_run_time_in_hours = np.round(total_run_time_in_hours, decimals=2)\n print(f\"Finished training in {rounded_total_run_time_in_hours} hours!\")",
"_____no_output_____"
],
[
"!nvidia-smi\n# If running on Colab, the best GPU to have in use is the NVIDIA Tesla P100",
"Mon Sep 20 21:42:31 2021 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 470.63.01 Driver Version: 460.32.03 CUDA Version: 11.2 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. |\n|===============================+======================+======================|\n| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n| N/A 58C P8 10W / 70W | 3MiB / 15109MiB | 0% Default |\n| | | N/A |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n"
],
[
"# Colab notebook may crash the first time this code cell is run. \n# Running this cell again after runtime restart shouldn't produce any more issues.\nbert_trainer(training, 'saved_model', epochs=50)",
"CUDA Active\n"
]
],
[
[
"## Making Predictions",
"_____no_output_____"
]
],
[
[
"class FrankenBert:\n \"\"\"\n Implements BertForSequenceClassification and BertTokenizer\n for binary classification from a saved model\n \"\"\"\n\n def __init__(self, path: str):\n \"\"\"\n If there's a GPU available, tell PyTorch to use the GPU.\n Loads model and tokenizer from saved model directory (path)\n \"\"\"\n if torch.cuda.is_available():\n self.device = torch.device('cuda')\n else:\n self.device = torch.device('cpu')\n self.model = BertForSequenceClassification.from_pretrained(path)\n self.tokenizer = BertTokenizer.from_pretrained(path)\n self.model.to(self.device)\n\n def predict(self, text: str):\n \"\"\"\n Makes a binary classification prediction based on saved model\n \"\"\"\n inputs = self.tokenizer(\n text,\n padding=True,\n truncation=True,\n max_length=280,\n return_tensors='pt',\n ).to(self.device)\n output = self.model(**inputs)\n prediction = output[0].softmax(1)\n tensors = prediction.detach().cpu().numpy()\n result = np.argmax(tensors)\n confidence = tensors[0][result]\n return f\"Rank: {result}, {100 * confidence:.2f}%\"",
"_____no_output_____"
],
[
"model = FrankenBert('saved_model')",
"_____no_output_____"
],
[
"model.predict(\"Mickey Mouse is in the house\")",
"_____no_output_____"
],
[
"model.predict(\"Cops gave me a speeding ticket for walking too fast\")",
"_____no_output_____"
],
[
"model.predict(\"Officer Kelly was shot and killed\")",
"_____no_output_____"
],
[
"model.predict(\"A Texas Department of Public Safety (DPS) trooper ran over and killed a man who was in the road near the State Capitol early Thursday morning, according to the Austin Police Department (APD). The crash happened at around 3:45 a.m. Thursday just west of the Texas State Capitol building. The trooper was heading northbound on Colorado Street and as he was turning left on 13th Street, the trooper hit the pedestrian. DPS said the crash happened while the trooper was patrolling the area.\")",
"_____no_output_____"
],
[
"model.predict(\"Cop ran me over with his SUV\")",
"_____no_output_____"
],
[
"model.predict(\"Cops hit her with a baton\")",
"_____no_output_____"
],
[
"model.predict(\"Cops sprayed my mom with pepper spray\")",
"_____no_output_____"
],
[
"model.predict(\"Cops shot rubber bullets at the crowd\")",
"_____no_output_____"
],
[
"model.predict(\"Police used tear gas on a pedestrian for no reason\")",
"_____no_output_____"
],
[
"model.predict(\"Cops killed that woman\")",
"_____no_output_____"
],
[
"model.predict(\"Yesterday I saw a policeman hit a poor person behind my house. I wonder whats going on\")",
"_____no_output_____"
],
[
"model.predict(\"Man ran up to me and pepper sprayed me. I've called the cops, but they have not gotten themselves involved yet.\")",
"_____no_output_____"
]
],
[
[
"## Saving Trained Model ",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/gdrive')\n",
"Mounted at /content/gdrive\n"
],
[
"#path that contains folder you want to copy\n%cd /content/gdrive/MyDrive/ColabNotebooks/Labs/saved_model\n\n# copy local folder to folder on Google Drive \n%cp -av /content/saved_model saved_model",
"/content/gdrive/MyDrive/ColabNotebooks/Labs/saved_model\n'/content/saved_model' -> 'saved_model'\n'/content/saved_model/.ipynb_checkpoints' -> 'saved_model/.ipynb_checkpoints'\n'/content/saved_model/config.json' -> 'saved_model/config.json'\n'/content/saved_model/pytorch_model.bin' -> 'saved_model/pytorch_model.bin'\n'/content/saved_model/tokenizer_config.json' -> 'saved_model/tokenizer_config.json'\n'/content/saved_model/special_tokens_map.json' -> 'saved_model/special_tokens_map.json'\n'/content/saved_model/vocab.txt' -> 'saved_model/vocab.txt'\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0d881c4e179d441016fd8749274bbb39fea922d | 219,822 | ipynb | Jupyter Notebook | programming_for_biology/data_analysis/visualization_and_statistics.ipynb | janhohenheim/programming-for-biology | d895a20a9591888616af2bae2e2d87feebf6d60a | [
"MIT"
] | null | null | null | programming_for_biology/data_analysis/visualization_and_statistics.ipynb | janhohenheim/programming-for-biology | d895a20a9591888616af2bae2e2d87feebf6d60a | [
"MIT"
] | null | null | null | programming_for_biology/data_analysis/visualization_and_statistics.ipynb | janhohenheim/programming-for-biology | d895a20a9591888616af2bae2e2d87feebf6d60a | [
"MIT"
] | null | null | null | 820.231343 | 146,390 | 0.955259 | [
[
[
"from programming_for_biology.data_analysis.cell_polygons import read_disc, Coordinates\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Polygon\nfrom matplotlib.collections import PatchCollection\nimport scipy\nfrom scipy import stats",
"_____no_output_____"
],
[
"\n#function to draw the wing disc\ndef draw_disc(cpx, cpy, area, size):\n #input arguments: \n ## cpx, cpy: x,y/positions of the vertices of all cells \n\t# format: list (1 element per cell) of sublists (1 number per vertex, eg 3 numbers for a triangle). \n ## area: cell area\n\t# format: 1-dimentsional numpy array (1 number per cell)\n ## size: 'large' for the large disc and 'small' for the small disc\n \n polygs = []\n for i in range(len(cpx)):\n \tpolyg = []\n \tfor j in range(len(cpx[i])):\n \t\tpolyg.append([cpx[i][j], cpy[i][j]])\n \tpolygs.append(Polygon(polyg))\n patches = PatchCollection(polygs)\n patches.set_cmap('jet')\n colors = 1 * area\n colors[colors>14] = 14 # color value for all the mitotic cells (area>14) is set to 14\n patches.set_array(np.array(colors)) #for colors\n\n fig = plt.figure()\n panel = fig.add_subplot(1,1,1)\n panel.add_collection(patches)\n color_bar = fig.colorbar(patches)\n color_bar.set_label('Cell area (um2)', rotation = 270, labelpad = 15)\n panel.set_xlim(-120, 110)\n panel.set_ylim(-85, 85)\n panel.set_aspect('equal')\n plt.title(size+' wing disc')\n plt.show()",
"_____no_output_____"
],
[
"disc = read_disc(\"wd-large\")\nareas = np.array([polygon.area() for polygon in disc.polygons])\ndistances = np.array([\n polygon.centroid().distance_to(Coordinates.center())\n for polygon in disc.polygons\n])\nhalf_distance = np.max(distances) / 2\n\n\np_fit = np.polyfit(distances, areas, 1)\nprint(\"slope:\", p_fit[0])\nprint(\"intercept:\", p_fit[1])\n\nt_statistic, p_value = scipy.stats.ttest_ind(areas[distances <= half_distance], areas[distances > half_distance])\nprint(\"t-statistic:\", t_statistic)\nprint(\"p-value:\", p_value)",
"slope: 0.010858185096558044\nintercept: 3.21155990997509\nt-statistic: -11.032177682807816\np-value: 4.680367606457902e-28\n"
],
[
"plt.title(\"Area vs. distance to center\")\nplt.plot(distances, areas, \".\")\nplt.xlabel(\"Distance to center [$\\mu$m]\")\nplt.ylabel(\"Area [$\\mu$m$^2$]\")\nplt.plot(np.linspace(0, max(distances), 100), np.polyval(p_fit, np.linspace(0, max(distances), 100)))\nplt.grid()\nplt.show()",
"_____no_output_____"
],
[
"cpx = [[c.x for c in p.coordinates()] for p in disc.polygons]\ncpy = [[c.y for c in p.coordinates()] for p in disc.polygons]\narea = np.array([p.area() for p in disc.polygons])\ndraw_disc(cpx, cpy, area, 'large')",
"_____no_output_____"
],
[
"disc = read_disc(\"wd-small\")\nareas = np.array([polygon.area() for polygon in disc.polygons])\ndistances = np.array([\n polygon.centroid().distance_to(Coordinates.center())\n for polygon in disc.polygons\n])\nhalf_distance = np.max(distances) / 2\n\n\np_fit = np.polyfit(distances, areas, 1)\nprint(\"slope:\", p_fit[0])\nprint(\"intercept:\", p_fit[1])\n\nt_statistic, p_value = scipy.stats.ttest_ind(areas[distances <= half_distance], areas[distances > half_distance])\nprint(\"t-statistic:\", t_statistic)\nprint(\"p-value:\", p_value)",
"slope: -0.05616343784335852\nintercept: 4.837544537680997\nt-statistic: 1.5550916739106937\np-value: 0.12096808418350641\n"
],
[
"plt.title(\"Area vs. distance to center\")\nplt.plot(distances, areas, \".\")\nplt.xlabel(\"Distance to center [$\\mu$m]\")\nplt.ylabel(\"Area [$\\mu$m$^2$]\")\nplt.plot(np.linspace(0, max(distances), 100), np.polyval(p_fit, np.linspace(0, max(distances), 100)))\nplt.grid()\nplt.show()",
"_____no_output_____"
],
[
"cpx = [[c.x for c in p.coordinates()] for p in disc.polygons]\ncpy = [[c.y for c in p.coordinates()] for p in disc.polygons]\narea = np.array([p.area() for p in disc.polygons])\ndraw_disc(cpx, cpy, area, 'small')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d881d53d8012e66e933e567d0920e70921d71f | 17,577 | ipynb | Jupyter Notebook | assets/all_html/2019_11_25_HW8_clean.ipynb | dskw1/dskw1.github.io | ee85aaa7c99c4320cfac95e26063beaac3ae6fcb | [
"MIT"
] | null | null | null | assets/all_html/2019_11_25_HW8_clean.ipynb | dskw1/dskw1.github.io | ee85aaa7c99c4320cfac95e26063beaac3ae6fcb | [
"MIT"
] | 1 | 2022-03-24T18:28:16.000Z | 2022-03-24T18:28:16.000Z | assets/all_html/2019_11_25_HW8_clean.ipynb | dskw1/dskw1.github.io | ee85aaa7c99c4320cfac95e26063beaac3ae6fcb | [
"MIT"
] | 1 | 2021-09-01T16:54:38.000Z | 2021-09-01T16:54:38.000Z | 38.377729 | 1,146 | 0.519884 | [
[
[
"# HW8: Topic Modeling",
"_____no_output_____"
]
],
[
[
"## =======================================================\n## IMPORTING\n## =======================================================\nimport os\ndef get_data_from_files(path):\n directory = os.listdir(path)\n results = []\n for file in directory:\n f=open(path+file, encoding = \"ISO-8859-1\")\n results.append(f.read())\n f.close()\n return results\n\n\n## =======================================================\n## MODELING\n## =======================================================\nimport pandas as pd\nfrom sklearn.decomposition import LatentDirichletAllocation\nfrom sklearn.feature_extraction.text import CountVectorizer\nimport gensim\nfrom gensim.utils import simple_preprocess\nfrom gensim.parsing.preprocessing import STOPWORDS\n\ndef run_lda(data, num_topics, stop_words):\n cv = CountVectorizer(stop_words = stop_words)\n lda_vec = cv.fit_transform(data)\n lda_columns = cv.get_feature_names()\n corpus = pd.DataFrame(lda_vec.toarray(), columns = lda_columns)\n lda = LatentDirichletAllocation(n_components=num_topics, max_iter=10, \n learning_method='online')\n lda_model = lda.fit_transform(lda_vec)\n print_topics(lda, cv)\n return lda_model, lda, lda_vec, cv, corpus\n\n\n## =======================================================\n## HELPERS\n## =======================================================\nimport numpy as np\nnp.random.seed(210)\n\ndef print_topics(model, vectorizer, top_n=10):\n for idx, topic in enumerate(model.components_):\n print(\"Topic %d:\" % (idx))\n print([(vectorizer.get_feature_names()[i], topic[i])\n for i in topic.argsort()[:-top_n - 1:-1]])\n \n\n## =======================================================\n## VISUALIZING\n## ======================================================= \nimport pyLDAvis.sklearn as LDAvis\nimport pyLDAvis\n\ndef start_vis(lda, lda_vec, cv):\n panel = LDAvis.prepare(lda, lda_vec, cv, mds='tsne')\n# pyLDAvis.show(panel)\n pyLDAvis.save_html(panel, 'FinalProject_lda_2.html')",
"_____no_output_____"
],
[
"df = pd.read_csv('../death_row_discritized.csv')\n\ndef to_string(tokens):\n try:\n return \" \".join(eval(tokens))\n except:\n return \"error\"\n \ndf['statement_string'] = df.apply(lambda x: to_string(x['last_statement']), axis=1)\n# y=df['vic_kid'].values\ny=df['prior_record'].values\ny_labels = list(set(y))\nX=df['statement_string'].values\n\nall_df = pd.DataFrame(X)\nall_df['labels'] = y\nall_df",
"_____no_output_____"
],
[
"# data = get_data_from_files('Dog_Hike/')\n# lda_model, lda, lda_vec, cv = run_lda(data,)\nfrom sklearn.feature_extraction import text \nstop_words = text.ENGLISH_STOP_WORDS\n\n# data_fd = get_data_from_files('110/110-f-d/')\n# data_fr = get_data_from_files('110/110-f-r/')\n\n# data = data_fd + data_fr\n# data\n\n\nlda_model, lda, lda_vec, cv, corpus = run_lda(all_df[0].values, 4, stop_words)\nstart_vis(lda, lda_vec, cv)",
"Topic 0:\n[('police', 15.498393763666396), ('officer', 13.293726335860406), ('justice', 8.575686139667397), ('ye', 8.118290540812445), ('coldblooded', 7.2103022374946715), ('hollered', 7.208622163389597), ('equal', 7.191651994116875), ('human', 6.861131532143653), ('shall', 6.746579925770169), ('extend', 6.1953650299434075)]\nTopic 1:\n[('holy', 10.411963107103821), ('pinkerton', 8.891806924562207), ('live', 5.73994464843442), ('muslim', 4.872920285114169), ('islam', 4.382778789675088), ('express', 4.231767059514237), ('moment', 4.15625504724466), ('allah', 3.95341584327608), ('dungeon', 3.4900957139663564), ('fear', 3.484549451930885)]\nTopic 2:\n[('black', 10.576045681366006), ('forward', 10.079796718480985), ('lynching', 8.866536281296659), ('america', 8.624390050450529), ('continue', 7.877887015179041), ('happening', 7.764868809289017), ('state', 7.313737231383107), ('marching', 6.300665405590482), ('carry', 6.292434820202825), ('people', 6.108769496284327)]\nTopic 3:\n[('first_person_pronoun', 4163.850036911061), ('pronoun', 2575.8194241030806), ('love', 605.8537134654808), ('family', 286.6706374354036), ('know', 280.53421731789035), ('thank', 236.92298282232363), ('sorry', 224.59967868722484), ('want', 206.70383682285978), ('god', 189.26574251976234), ('yall', 188.68634502013964)]\n"
],
[
"# corpus\n\n# c2 = corpus.append(df.sum().rename('Total'))\nct = corpus.T\nct['total'] = ct.sum(axis=1)\nbig_total = ct[ct['total'] > 68]\nlen(big_total)",
"_____no_output_____"
],
[
"len(ct)",
"_____no_output_____"
],
[
"btt = big_total.T",
"_____no_output_____"
],
[
"additional_stopwords = btt.columns",
"_____no_output_____"
],
[
"\n\nstop_words = text.ENGLISH_STOP_WORDS.union(additional_stopwords)",
"_____no_output_____"
],
[
"stop_words",
"_____no_output_____"
],
[
"\nlda_model, lda, lda_vec, cv, corpus = run_lda(data, 40, stop_words)",
"_____no_output_____"
],
[
"start_vis(lda, lda_vec, cv)",
"/Users/danielcaraway/anaconda3/lib/python3.7/site-packages/pyLDAvis/_prepare.py:257: FutureWarning: Sorting because non-concatenation axis is not aligned. A future version\nof pandas will change to not sort by default.\n\nTo accept the future behavior, pass 'sort=False'.\n\nTo retain the current behavior and silence the warning, pass 'sort=True'.\n\n return pd.concat([default_term_info] + list(topic_dfs))\n"
],
[
"import plotly.plotly as py\nfrom plotly.grid_objs import Grid, Column\nfrom plotly.tools import FigureFactory as FF\n\nimport pandas as pd\nimport time\n\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d8a6a71aed7964f5ababf3ce5c63a497d730c2 | 4,038 | ipynb | Jupyter Notebook | image-tools.ipynb | AI-Force/aiforce | a64e1d8ea1a6c822cd9742f44371d5402b5e6435 | [
"Apache-2.0"
] | 1 | 2021-06-24T04:04:24.000Z | 2021-06-24T04:04:24.000Z | image-tools.ipynb | AI-Force/aiforce | a64e1d8ea1a6c822cd9742f44371d5402b5e6435 | [
"Apache-2.0"
] | null | null | null | image-tools.ipynb | AI-Force/aiforce | a64e1d8ea1a6c822cd9742f44371d5402b5e6435 | [
"Apache-2.0"
] | null | null | null | 25.884615 | 75 | 0.562902 | [
[
[
"# default_exp image.tools",
"_____no_output_____"
],
[
"# hide\nfrom nbdev.showdoc import *",
"_____no_output_____"
],
[
"# export\nfrom enum import Enum",
"_____no_output_____"
],
[
"# hide\n%reload_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"# Image Tools\n> Tools for images.",
"_____no_output_____"
]
],
[
[
"# export\nclass ImageOrientation(Enum):\n \"\"\"\n Image EXIF orientations.\n \"\"\"\n TOP = 1\n TOP_FLIPPED = 2\n BOTTOM = 3\n BOTTOM_FLIPPED = 4\n RIGHT_FLIPPED = 5\n RIGHT = 6\n LEFT_FLIPPED = 7\n LEFT = 8\n\n def __str__(self):\n return str(self.value)",
"_____no_output_____"
],
[
"# export\ndef get_image_scale(image_size, target_size):\n \"\"\"\n Calculates the scale of the image to fit the target size.\n `image_size`: The image size as tuple of (w, h)\n `target_size`: The target size as tuple of (w, h).\n :return: The image scale as tuple of (w_scale, h_scale)\n \"\"\"\n (image_w, image_h) = image_size\n (target_w, target_h) = target_size\n scale = (target_w / float(image_w), target_h / float(image_h))\n return scale\n",
"_____no_output_____"
],
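[
"# illustrative usage only (an assumption for this notebook, not part of the exported library):\n# a 200x100 image fitted to a 100x100 target should scale by (0.5, 1.0)\nget_image_scale((200, 100), (100, 100))",
"_____no_output_____"
],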
[
"# hide\n\n# for generating scripts from notebook directly\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted annotation-core.ipynb.\nConverted annotation-folder_category_adapter.ipynb.\nConverted annotation-multi_category_adapter.ipynb.\nConverted annotation-via_adapter.ipynb.\nConverted annotation-yolo_adapter.ipynb.\nConverted annotation_converter.ipynb.\nConverted annotation_viewer.ipynb.\nConverted category_tools.ipynb.\nConverted core.ipynb.\nConverted dataset-core.ipynb.\nConverted dataset-image_classification.ipynb.\nConverted dataset-image_object_detection.ipynb.\nConverted dataset-image_segmentation.ipynb.\nConverted dataset-type.ipynb.\nConverted dataset_generator.ipynb.\nConverted evaluation-core.ipynb.\nConverted geometry.ipynb.\nConverted image-color_palette.ipynb.\nConverted image-inference.ipynb.\nConverted image-opencv_tools.ipynb.\nConverted image-pillow_tools.ipynb.\nConverted image-tools.ipynb.\nConverted index.ipynb.\nConverted io-core.ipynb.\nConverted tensorflow-tflite_converter.ipynb.\nConverted tensorflow-tflite_metadata.ipynb.\nConverted tensorflow-tfrecord_builder.ipynb.\nConverted tools-check_double_images.ipynb.\nConverted tools-downloader.ipynb.\nConverted tools-image_size_calculator.ipynb.\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0d8c1fde909edbd87bd5c812a451a9b18eed11c | 5,389 | ipynb | Jupyter Notebook | Week 3/Normal-Equation-Gradient-Descent-Tips.ipynb | thanhhff/AIVN-Course-AI-For-Everyone | e8e582dea304341f0c03cedb920bcd1d450e5a9c | [
"MIT"
] | 25 | 2019-11-24T03:15:22.000Z | 2021-12-29T07:23:19.000Z | Week 3/Normal-Equation-Gradient-Descent-Tips.ipynb | hyperstar1/AIVN-Machine-Learning | e8e582dea304341f0c03cedb920bcd1d450e5a9c | [
"MIT"
] | 1 | 2019-12-03T10:44:48.000Z | 2019-12-03T10:44:48.000Z | Week 3/Normal-Equation-Gradient-Descent-Tips.ipynb | hyperstar1/AIVN-Machine-Learning | e8e582dea304341f0c03cedb920bcd1d450e5a9c | [
"MIT"
] | 13 | 2019-11-24T04:33:42.000Z | 2022-03-02T10:58:14.000Z | 35.688742 | 324 | 0.570607 | [
[
[
"### 1. Gradient Descent Tips \n\n*Nhắc lại*: Công thức cập nhật $\\theta$ ở vòng lặp thứ $t$:\n<center>$\\theta_{t+1} := \\theta_t - \\alpha \\Delta_{\\theta} f(\\theta_t)$</center>\n\nTrong đó:\n- $\\alpha$: learning rate - tốc độ học tập.\n- $\\Delta_{\\theta} f(\\theta_t)$: đạo hàm của hàm số tại điểm $\\theta$.\n\nViệc lựa chọn giá trị $\\alpha$ (learning rate) rất quan trọng. Nó quyết định việc bài toán có thể hội tụ tới giá trị global minimum cho hàm $f(\\theta)$ hay không. Gradient Descent có thể làm việc hiệu quả hơn bằng cách chọn Learning Rate phù hợp.\n\nMột số trường hợp về lựa chọn Learning Rate như sau (các bạn có thể thử thay đổi [tại đây](https://developers.google.com/machine-learning/crash-course/fitter/graph)):\n\n**Learning Rate quá lớn** - Gradient Descent không thể hội tụ được về giá trị Minimum.\n\n<img src=\"images/image-3.gif\" style=\"width:50%;height:50%;\">\n\n**Learning Rate quá nhỏ**: - Gradient Descent có thể hội tụ được về giá trị Minimum trong bài toán này, nhưng mất tới 81 vòng lặp để hội tụ. Trong một số bài toán có nhiều giá trị cực tiểu địa phương - LR quá nhỏ có thể khiến hàm số bị mắc kẹt tại cực tiểu địa phương và không bao giờ hội tụ được về giá trị tối ưu.\n\n<img src=\"images/image-5.png\" style=\"width:50%;height:50%;\">\n\n",
"_____no_output_____"
],
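[
"To make the update rule concrete, here is a minimal sketch (added for illustration, not part of the original lesson; the quadratic objective and the starting point are assumptions) of gradient descent on $f(\\theta) = \\theta^2$, showing how different values of $\\alpha$ behave:\n\n```python\ndef gradient_descent(alpha, theta=5.0, iters=20):\n    for _ in range(iters):\n        grad = 2 * theta              # derivative of f(theta) = theta^2\n        theta = theta - alpha * grad  # update rule: theta := theta - alpha * grad\n    return theta\n\nprint(gradient_descent(alpha=0.01))  # too small: converges very slowly\nprint(gradient_descent(alpha=0.5))   # well chosen: converges immediately\nprint(gradient_descent(alpha=1.1))   # too large: diverges\n```",
"_____no_output_____"
],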
[
"**Learning Rate vừa:** nếu Learning Rate quá nhỏ khiến bài toán hội tụ lâu, các bạn hãy thử tăng giá trị này lên thêm. Trong bài toán này khi Learning Rate = 1.0 sẽ mất 6 vòng lặp để hội tụ.\n\n<img src=\"images/image-4.png\" style=\"width:50%;height:50%;\">\n\n**Learning Rate tối ưu:** trong thực tế rất khó có thể tìm ra được giá trị Learning Rate tối ưu. Việc tìm được giá trị Learning Rate tương đối với giá trị tối ưu sẽ giúp bài toán hội tụ nhanh hơn.\n\n<img src=\"images/image-6.png\" style=\"width:50%;height:50%;\">\n\n\n**Tổng kết:** \n- Nếu Learning Rate quá nhỏ: mất quá nhiều thời gian để hội tụ, đồng thời có thể bị mắc kẹt ở cực tiểu địa phương.\n- Nếu Learning Rate quá lớn: không thể hội tụ được.",
"_____no_output_____"
],
[
"### Một vài lời khuyên \n\n- Trước khi bắt đầu bài toán, các bạn hãy chuẩn hoá dữ liệu về khoảng [-1;1] hay [0;1] sẽ giúp bài toán hội tụ nhanh hơn.\n- Bắt đầu bài toán bằng Learning Rate nhỏ. Tăng dần Learning Rate nếu không thấy phù hợp.\n- Với bài toán nhiều dữ liệu hãy sử dụng Mini-batch Gradient Descent (phương pháp này sẽ được đề cập trong bài tới).\n- Sử dụng Momentum cho Gradient Descent (phương pháp này sẽ được đề cập trong bài tới).",
"_____no_output_____"
],
[
"### 2. Normal Equation\n\nNormal Equation là phương pháp tìm nghiệm của bài toán Linear Regression mà không cần tới vòng lặp, không cần lựa chọn Learning Rate. Và cũng không cần phải Scaling dữ liệu.\n\nCông thức toán đằng sau nghiệm của phương trình này các bạn có thể đọc thêm tại:\n\nhttps://eli.thegreenplace.net/2014/derivation-of-the-normal-equation-for-linear-regression\n\nVà công thức quan trọng nhất của chúng ta:\n\n<center> $\\theta = (X^T X)^{-1} X^Ty $ </center>",
"_____no_output_____"
],
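[
"As a quick illustration (a minimal sketch added here, not part of the original lesson; the toy data is an assumption), the formula can be evaluated directly with NumPy:\n\n```python\nimport numpy as np\n\n# toy data generated from y = 1 + 2x, with a bias column prepended to X\nX = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])\ny = np.array([1.0, 3.0, 5.0, 7.0])\n\n# normal equation: theta = (X^T X)^{-1} X^T y\ntheta = np.linalg.inv(X.T @ X) @ X.T @ y\nprint(theta)  # approximately [1. 2.]\n```",
"_____no_output_____"
],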
[
"So sánh giữa Normal Equation và Gradient Descent:\n\n<table>\n <tr>\n <td> Gradient Descent </td>\n <td> Normal Equation </td>\n </tr>\n\n <tr>\n <td> Cần phải chọn Learning Rate </td>\n <td> Không cần chọn Learning Rate </td> \n </tr>\n \n <tr>\n <td> Cần nhiều vòng lặp </td>\n <td> Không cần vòng lặp </td>\n </tr>\n \n <tr>\n <td> Thời gian tính: $O(kn^2)$ </td>\n <td> Thời gian tính: $O(n^3)$, cần phải tính ma trận nghịch đảo </td> \n </tr>\n \n <tr>\n <td> Hoạt động tốt với dữ liệu lớn </td>\n <td> Rất chậm với dữ liệu lớn </td> \n </tr>\n \n</table>",
"_____no_output_____"
],
[
"Với Normal Equation, việc tính toán mất thời gian $O(n^3)$ nên với dữ liệu lớn (n > 10.000 dữ liệu) chúng ta nên sử dụng Gradient Descent.",
"_____no_output_____"
],
[
"### Tài liệu tham khảo \n\n[1] [CS229 - Machine Learning Course](http://cs229.stanford.edu)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0d8c77a24f844de213e070d9737913e111a255e | 30,956 | ipynb | Jupyter Notebook | Graph/Untitled2.ipynb | robin9804/Jupyter_project | 42b026994be362317b5a61866c8f1b1c920f9f10 | [
"MIT"
] | null | null | null | Graph/Untitled2.ipynb | robin9804/Jupyter_project | 42b026994be362317b5a61866c8f1b1c920f9f10 | [
"MIT"
] | 1 | 2020-06-24T13:41:54.000Z | 2020-06-24T13:41:54.000Z | Graph/Untitled2.ipynb | robin9804/Jupyter_project | 42b026994be362317b5a61866c8f1b1c920f9f10 | [
"MIT"
] | null | null | null | 30,956 | 30,956 | 0.875371 | [
[
[
"import math\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nF = np.array([[-1,0],[0,1]])\n\nACB = [150, -60, 60, -90]\nABC = [150, 60, -60, 30]\nBAC = [-90, -60, 60, 30]\nBCA = [-90, 60, -60, 150]\nCBA = [30, -60, 60, 150]\nCAB = [30, 60, -60, -90]\n\ndel_s=np.exp(1.31j)\ndel_p=np.exp(2.05j)\n\nP=np.array([[del_s, 0],[0, del_p]])\n\ndef Rotation(ang):\n return np.array([[np.cos(ang),np.sin(ang)], [-np.sin(ang),np.cos(ang)]])\n\ndef total_matrix(direction):\n r0 = np.dot(P, Rotation(direction[0]))\n r1 = np.dot(Rotation(direction[1]),r0)\n r1 = np.dot(P,r1)\n r2 = np.dot(Rotation(direction[2]),r1)\n r2 = np.dot(P,r2)\n r3 = np.dot(Rotation(direction[3]),r2)\n return np.dot(F,r3)\n\ndef OutSignal(side,ang_in,ang_pol):\n Pin = np.array([[np.cos(math.radians(ang_in))],[np.sin(math.radians(ang_in))]])\n Pout = abs(np.dot(total_matrix(side),Pin))\n Ex = Pout[[0],[0]] * np.cos(math.radians(ang_pol))\n Ey = Pout[[1],[0]] * np.sin(math.radians(ang_pol))\n return np.sqrt(Ex**2 + Ey**2)\n\ndef graph(inPol,polcam):\n pixel = np.zeros((400,400))\n\n #inPol = 90\n #polcam = 90\n for x in range(400):\n for y in range(400):\n if y < 200 and x < (y-200)/np.sqrt(3) + 200: #CAB\n color = 256 * OutSignal(CAB,inPol,polcam)\n pixel[[x],[y]]=color\n elif x > (y-200)/np.sqrt(3) + 200 and x < -(y-200)/np.sqrt(3) + 200: #CBA\n color = 256 * OutSignal(CBA,inPol,polcam)\n pixel[[x],[y]]=color\n elif x > -(y-200)/np.sqrt(3) + 200 and y < 200 : #BCA\n color = 256 * OutSignal(BCA,inPol,polcam)\n pixel[[x],[y]]=color\n elif y > 200 and x > (y-200)/np.sqrt(3) + 200 : #BAC\n color = 256 * OutSignal(BAC,inPol,polcam)\n pixel[[x],[y]]=color\n elif x < (y-200)/np.sqrt(3) + 200 and x > -(y-200)/np.sqrt(3) + 200: #ABC\n color = 256 * OutSignal(ABC,inPol,polcam)\n pixel[[x],[y]]=color\n elif y > 200 and x < -(y-200)/np.sqrt(3) + 200: #CAB\n color = 256 * OutSignal(CAB,inPol,polcam)\n pixel[[x],[y]]=color\n \n return pixel\n\n",
"_____no_output_____"
],
[
" InPol = 90\n\n plt.subplot(221)\n plt.imshow(graph(InPol,0))\n plt.show()\n plt.subplot(222)\n plt.imshow(graph(InPol,45))\n plt.show()\n plt.subplot(223)\n plt.imshow(graph(InPol,90))\n plt.show()\n plt.subplot(224)\n plt.imshow(graph(InPol,135))\n plt.colorbar()\n plt.show()",
"_____no_output_____"
],
[
"x= np.arange(0,10,0.1)\ny= np.sin(x)\n\nfig = plt.figure(figsize=(8,8))\n\nax = [plt.subplot(2,2,i+1) for i in range(4)]\n\nfor i in range(ax):\n i.plot(y)\n i.title(\"%d\"%i)\n i.set_xticklabels([])\n i.set_yticklabels([])\n i.set_aspect('equal')\n\nplt.subplots_adjust(wspace=0, hspace=0)\n'''\nplt.subplot(221)\nplt.plot(y)\nplt.title(\"1\")\nplt.set_aspect('equal')\nplt.subplot(222)\nplt.plot(y)\nplt.title(\"2\")\nplt.set_aspect('equal')\nplt.subplot(223)\nplt.plot(y)\nplt.title(\"3\")\nplt.set_aspect('equal')\nplt.subplot(224)\nplt.plot(y)\nplt.title(\"4\")\nplt.set_aspect('equal')\n'''\nplt.savefig(\"fig2.png\")",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0d8d48e57e2a0b97c1302bbd04d1aadf0f161cd | 20,948 | ipynb | Jupyter Notebook | pl/img_meta.ipynb | ronaldokun/isic2019 | 26d436f7ecd9efbce8834dd01aae02c2a8ad85f6 | [
"MIT"
] | null | null | null | pl/img_meta.ipynb | ronaldokun/isic2019 | 26d436f7ecd9efbce8834dd01aae02c2a8ad85f6 | [
"MIT"
] | null | null | null | pl/img_meta.ipynb | ronaldokun/isic2019 | 26d436f7ecd9efbce8834dd01aae02c2a8ad85f6 | [
"MIT"
] | null | null | null | 38.226277 | 232 | 0.50444 | [
[
[
"import warnings\nimport time\nfrom data import *\n#from data_transforms import *\nfrom apex import amp\nfrom torch import nn\nfrom torch.utils.data import Dataset, DataLoader, Subset\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import accuracy_score, roc_auc_score\nfrom sklearn.model_selection import StratifiedKFold, GroupKFold, KFold\nfrom efficientnet_pytorch import EfficientNet\nfrom catalyst.data.sampler import BalanceClassSampler\n\n#CV2\nimport cv2\n\n#Importing Tabnet\nfrom pytorch_tabnet.tab_network import TabNet\n\nimport datetime\nfrom fastprogress import master_bar, progress_bar\n%load_ext autoreload\n%autoreload 2\nwarnings.simplefilter('ignore')",
"_____no_output_____"
],
[
"batch_size=32",
"_____no_output_____"
],
[
"device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")",
"_____no_output_____"
],
[
"# Defining Categorical variables and their Indexes, embedding dimensions , number of classes each have\ndf = pd.read_csv('/data/full/folds_13062020.csv')\ndf_test =pd.read_csv('/data/full/test.csv')\ndf_test['anatom_site_general_challenge'].fillna('unknown',inplace=True)\ndf_test['target'] = 0",
"_____no_output_____"
],
[
"features = ['sex', 'age_approx', 'anatom_site_general_challenge'] \ncat = ['sex', 'anatom_site_general_challenge']\ntarget = 'target'\n\ncategorical_columns = []\n\nfor col in cat: \n print('train', col, df[col].nunique())\n print('test', col, df_test[col].nunique())\n l_enc = LabelEncoder()\n df[col] = l_enc.fit_transform(df[col].values)\n df_test[col] = l_enc.transform(df_test[col].values)",
"train sex 3\ntest sex 2\ntrain anatom_site_general_challenge 8\ntest anatom_site_general_challenge 7\n"
],
[
"class MelanomaDataset(Dataset): \n\n def __init__(self, df: pd.DataFrame, \n imfolder: (str, Path), \n train: bool = True, \n transforms = None, \n meta_features = None):\n \"\"\"\n Class initialization\n Args:\n df (pd.DataFrame): DataFrame with data description\n imfolder (str): folder with images\n train (bool): flag of whether a training dataset is being initialized or testing one\n transforms: image transformation method to be applied\n meta_features (list): list of features with meta information, such as sex and age\n \n \"\"\"\n self.df = df\n self.imfolder = imfolder\n self.transforms = transforms\n self.train = train\n self.meta_features = meta_features\n \n def __getitem__(self, index):\n im_path = Path(f\"{self.imfolder}/{self.df.iloc[index]['image_name']}.jpg\")\n x = cv2.imread(str(im_path))\n meta = torch.tensor(self.df.iloc[index][self.meta_features].values, dtype=torch.float)\n\n if self.transforms:\n x = self.transforms(x)\n \n if self.train:\n y = self.df.iloc[index]['target']\n y_meta = self.one_hot(2, y) \n return {'image': x,\n 'label': y,\n 'features': meta,\n 'target': y_meta}\n else:\n return {'image': x,\n 'label': None,\n 'features': meta,\n 'target': None}\n \n def __len__(self):\n return len(self.df)\n \n @staticmethod\n def one_hot(size, target):\n tensor = torch.zeros(size, dtype=torch.float32)\n tensor[target] = 1.\n return tensor",
"_____no_output_____"
],
[
"class CustomTabnet(nn.Module):\n def __init__(self, input_dim, output_dim,n_d=8, n_a=8,n_steps=3, gamma=1.3,\n cat_idxs=[], cat_dims=[3,8], cat_emb_dim=[3,5],n_independent=2, n_shared=2,\n momentum=0.02,mask_type=\"sparsemax\"):\n \n super(CustomTabnet, self).__init__()\n self.tabnet = TabNet(input_dim=input_dim,output_dim=output_dim, n_d=n_d, n_a=n_a,n_steps=n_steps, gamma=gamma,\n cat_idxs=cat_idxs, cat_dims=cat_dims, cat_emb_dim=cat_emb_dim,n_independent=n_independent,\n n_shared=n_shared, momentum=momentum,mask_type=\"sparsemax\")\n \n \n \n def forward(self, x):\n return self.tabnet(x)",
"_____no_output_____"
],
[
"tabnet = CustomTabnet(2, 2)",
"_____no_output_____"
],
[
"list(tabnet.tabnet.modules())[-1].in_features",
"_____no_output_____"
],
[
"net = EfficientNet.from_pretrained('efficientnet-b0')",
"Downloading: \"https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth\" to /content/.cache/torch/hub/checkpoints/efficientnet-b0-355c32eb.pth\n"
],
[
"class Effnet(nn.Module):\n def __init__(self, arch, input_dim, output_dim):\n super().__init__()\n self.arch = arch\n self.arch._fc = nn.Linear(in_features=1280, out_features=64, bias=True)\n self.tab = CustomTabnet(input_dim, output_dim)\n self.tab = nn.Sequential(*list(self.tab.modules())[:-1])\n self.ouput = nn.Linear(64 + 8, 1)\n \n def forward(self, inputs):\n \"\"\"\n No sigmoid in forward because we are going to use BCEWithLogitsLoss\n Which applies sigmoid for us when calculating a loss\n \"\"\"\n x, meta = inputs['image'], inputs['features']\n cnn_features = self.arch(x)\n meta_features = self.tab(meta)\n features = torch.cat((cnn_features, meta_features), dim=1)\n output = self.ouput(features)\n return output \n ",
"_____no_output_____"
],
[
"test = MelanomaDataset(df=test_df,\n imfolder=TEST, \n train=False,\n transforms=train_transform, # For TTA\n meta_features=meta_features)",
"_____no_output_____"
],
[
"import gc\n\nepochs = 15 # Number of epochs to run\nes_patience = 3 # Early Stopping patience - for how many epochs with no improvements to wait\nTTA = 3 # Test Time Augmentation rounds\n\noof = np.zeros((len(train_df), 1)) # Out Of Fold predictions\npreds = torch.zeros((len(test), 1), dtype=torch.float32, device=device) # Predictions for test test\n\nskf = KFold(n_splits=5, shuffle=True, random_state=47)",
"_____no_output_____"
],
[
"for fold,(idxT, idxV) in enumerate(list(skf.split(np.arange(15)))[3:], 4):\n print('=' * 20, 'Fold', fold, '=' * 20)\n \n train_idx = train_df.loc[train_df['fold'].isin(idxT)].index\n val_idx = train_df.loc[train_df['fold'].isin(idxV)].index\n \n \n model_path = f'/out/model_{fold}.pth' # Path and filename to save model to\n best_val = 0 # Best validation score within this fold\n patience = es_patience # Current patience counter\n arch = EfficientNet.from_pretrained('efficientnet-b1')\n model = Effnet(arch=arch, n_meta_features=len(meta_features)) # New model for each fold\n if Path(model_path).exists():\n inference = True\n \n model = model.to(device)\n optim = torch.optim.AdamW(model.parameters(), lr=0.001)\n scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer=optim, mode='max', patience=1, verbose=True, factor=0.2)\n criterion = nn.BCEWithLogitsLoss()\n \n train = MelanomaDataset(df=train_df.iloc[train_idx].reset_index(drop=True), \n imfolder=TRAIN, \n train=True, \n transforms=train_transform,\n meta_features=meta_features)\n val = MelanomaDataset(df=train_df.iloc[val_idx].reset_index(drop=True), \n imfolder=TRAIN, \n train=True, \n transforms=test_transform,\n meta_features=meta_features)\n \n train_loader = DataLoader(dataset=train, batch_size=batch_size, shuffle=True, num_workers=4)\n val_loader = DataLoader(dataset=val, batch_size=batch_size, shuffle=False, num_workers=4)\n test_loader = DataLoader(dataset=test, batch_size=batch_size, shuffle=False, num_workers=4)\n \n mb = master_bar(range(epochs))\n\n if not inference:\n \n for epoch in mb:\n start_time = time.time()\n correct = 0\n epoch_loss = 0\n model.train()\n \n for x, y in progress_bar(train_loader, parent=mb, total=int(len(train)/ 64)):\n x[0] = torch.tensor(x[0], device=device, dtype=torch.float32)\n x[1] = torch.tensor(x[1], device=device, dtype=torch.float32)\n y = torch.tensor(y, device=device, dtype=torch.float32)\n optim.zero_grad()\n z = model(x)\n loss = criterion(z, y.unsqueeze(1))\n loss.backward()\n optim.step()\n pred = torch.round(torch.sigmoid(z)) # round off sigmoid to obtain predictions\n correct += (pred.cpu() == y.cpu().unsqueeze(1)).sum().item() # tracking number of correctly predicted samples\n epoch_loss += loss.item()\n mb.child.comment = f'{epoch_loss:.4f}'\n train_acc = correct / len(train_idx)\n \n model.eval() # switch model to the evaluation mode\n val_preds = torch.zeros((len(val_idx), 1), dtype=torch.float32, device=device)\n with torch.no_grad(): # Do not calculate gradient since we are only predicting\n # Predicting on validation set\n for j, (x_val, y_val) in progress_bar(enumerate(val_loader), parent=mb, total=int(len(val)/32)):\n x_val[0] = torch.tensor(x_val[0], device=device, dtype=torch.float32)\n x_val[1] = torch.tensor(x_val[1], device=device, dtype=torch.float32)\n y_val = torch.tensor(y_val, device=device, dtype=torch.float32)\n z_val = model(x_val)\n val_pred = torch.sigmoid(z_val)\n val_preds[j*val_loader.batch_size:j*val_loader.batch_size + x_val[0].shape[0]] = val_pred\n val_acc = accuracy_score(train_df.iloc[val_idx]['target'].values, torch.round(val_preds.cpu()))\n val_roc = roc_auc_score(train_df.iloc[val_idx]['target'].values, val_preds.cpu())\n \n mb.write('Epoch {:03}: | Loss: {:.3f} | Train acc: {:.3f} | Val acc: {:.3f} | Val roc_auc: {:.3f} | Training time: {}'.format(\n epoch + 1, \n epoch_loss, \n train_acc, \n val_acc, \n val_roc, \n str(datetime.timedelta(seconds=time.time() - start_time))[:7]))\n \n scheduler.step(val_roc)\n \n if val_roc >= 
best_val:\n best_val = val_roc\n patience = es_patience # Resetting patience since we have new best validation accuracy\n torch.save(model, model_path) # Saving current best model\n else:\n patience -= 1\n if patience == 0:\n print('Early stopping. Best Val roc_auc: {:.3f}'.format(best_val))\n break\n \n model = torch.load(model_path) # Loading best model of this fold\n model.eval() # switch model to the evaluation mode\n val_preds = torch.zeros((len(val_idx), 1), dtype=torch.float32, device=device)\n with torch.no_grad():\n # Predicting on validation set once again to obtain data for OOF\n for j, (x_val, y_val) in progress_bar(enumerate(val_loader), total=int(len(val)/32)):\n x_val[0] = torch.tensor(x_val[0], device=device, dtype=torch.float32)\n x_val[1] = torch.tensor(x_val[1], device=device, dtype=torch.float32)\n y_val = torch.tensor(y_val, device=device, dtype=torch.float32)\n z_val = model(x_val)\n val_pred = torch.sigmoid(z_val)\n val_preds[j*val_loader.batch_size:j*val_loader.batch_size + x_val[0].shape[0]] = val_pred\n oof[val_idx] = val_preds.cpu().numpy()\n \n # Predicting on test set\n for _ in range(TTA):\n for i, x_test in progress_bar(enumerate(test_loader), parent=mb, total=len(test)//32):\n x_test[0] = torch.tensor(x_test[0], device=device, dtype=torch.float32)\n x_test[1] = torch.tensor(x_test[1], device=device, dtype=torch.float32)\n z_test = model(x_test)\n z_test = torch.sigmoid(z_test)\n preds[i*test_loader.batch_size:i*test_loader.batch_size + x_test[0].shape[0]] += z_test\n preds /= TTA\n \n del train, val, train_loader, val_loader, x, y, x_val, y_val\n gc.collect()\n \npreds /= skf.n_splits",
"==================== Fold 4 ====================\nLoaded pretrained weights for efficientnet-b1\n"
],
[
"# Saving OOF predictions so stacking would be easier\npd.Series(oof.reshape(-1,)).to_csv('oof.csv', index=False)\nsub = pd.read_csv(DATA / 'sample_submission.csv')\nsub['target'] = preds.cpu().numpy().reshape(-1,)\nsub.to_csv('/out/img_meta_submission.csv', index=False)",
"_____no_output_____"
],
[
"!kaggle competitions submit -c siim-isic-melanoma-classification -f submission.csv -m \"Melanoma Starter Image Size 384\"",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d8dd546c08664022c8eb28110f93f21983c603 | 171,287 | ipynb | Jupyter Notebook | dataScience/02DataMining/02_Data-Computation-Analysis-Visualization/2-5_Pandas_String_Operations.ipynb | yangpanyang/DemoPractice | 2ab87654030e8fc05899ea684d51b39e3de6ccc4 | [
"Apache-2.0"
] | null | null | null | dataScience/02DataMining/02_Data-Computation-Analysis-Visualization/2-5_Pandas_String_Operations.ipynb | yangpanyang/DemoPractice | 2ab87654030e8fc05899ea684d51b39e3de6ccc4 | [
"Apache-2.0"
] | 3 | 2021-01-21T01:26:08.000Z | 2021-12-09T22:56:31.000Z | dataScience/02DataMining/02_Data-Computation-Analysis-Visualization/2-5_Pandas_String_Operations.ipynb | yangpanyang/DemoPractice | 2ab87654030e8fc05899ea684d51b39e3de6ccc4 | [
"Apache-2.0"
] | null | null | null | 184.975162 | 31,650 | 0.881807 | [
[
[
"# pandas字符串操作\n很明显除了数值型,我们处理的数据还有很多字符类型的,而这部分数据显然也非常重要,因此这个部分我们提一提pandas的字符串处理。",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n%config ZMQInteractiveShell.ast_node_interactivity='all'\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\npd.set_option('display.mpl_style', 'default')\nplt.rcParams['figure.figsize'] = (15, 3)\nplt.rcParams['font.family'] = 'sans-serif'",
"/opt/anaconda3/envs/python27/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2881: FutureWarning: \nmpl_style had been deprecated and will be removed in a future version.\nUse `matplotlib.pyplot.style.use` instead.\n\n exec(code_obj, self.user_global_ns, self.user_ns)\n"
]
],
[
[
"前面看到pandas在处理数值型的时候,各种如鱼得水,偷偷告诉你,pandas处理字符串也相当生猛。<br>\n咱们来读一份天气数据。",
"_____no_output_____"
]
],
[
[
"weather_2012 = pd.read_csv('./data/weather_2012.csv', parse_dates=True, index_col='Date/Time')\nweather_2012[:5]",
"_____no_output_____"
]
],
[
[
"# 5.1字符串操作",
"_____no_output_____"
],
[
"从上面的数据里面可以看到,有 'Weather' 这一列。我们这里假定包含 \"Snow\" 的才是下雪天。\n\npandas的str类型提供了一系列方便的函数,比如这里的contains,更多的例子可以查看 [这里](http://pandas.pydata.org/pandas-docs/stable/basics.html#vectorized-string-methods)。",
"_____no_output_____"
]
],
[
[
"weather_description = weather_2012['Weather']\nis_snowing = weather_description.str.contains('Snow')",
"_____no_output_____"
]
],
[
[
"你看我们contains返回的其实是布尔型的判定结果的dataframe。",
"_____no_output_____"
]
],
[
[
"# 返回bool型内容的dataframe\nis_snowing[:5]",
"_____no_output_____"
]
],
[
[
"你以为懒癌晚期的我会一个个去看吗!!图样图森破!!我一个函数就给你画出来了!!!",
"_____no_output_____"
]
],
[
[
"# 就是屌!!!\nis_snowing.plot()",
"_____no_output_____"
]
],
[
[
"# 6.2 平均气温",
"_____no_output_____"
],
[
"如果我们想知道每个月的温度值中位数,有一个很有用的函数可以调用哈,叫 `resample()` ",
"_____no_output_____"
]
],
[
[
"weather_2012['Temp (C)'].resample('M', how=np.median).plot(kind='bar')",
"/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:1: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n \"\"\"Entry point for launching an IPython kernel.\n"
]
],
[
[
"符合预期对吧,7月和8月是温度最高的",
"_____no_output_____"
],
[
"你要知道,布尔型的 `True` 和 `False`其实是不便于运算的,当然,其实他们就是0和1了,所以我们转成float型去做做运算可好?",
"_____no_output_____"
]
],
[
[
"is_snowing.astype(float)[:10]",
"_____no_output_____"
]
],
[
[
"然后我们很聪明地用 `resample` 去找到每个月下雪的比例状况(为嘛感觉在做很无聊的事情,其实哪个月下雪多我们知道的对么...)",
"_____no_output_____"
]
],
[
[
"is_snowing.astype(float).resample('M', how=np.mean)",
"/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:1: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"is_snowing.astype(float).resample('M', how=np.mean).plot(kind='bar')",
"/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:1: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n \"\"\"Entry point for launching an IPython kernel.\n"
]
],
[
[
"So,你也看到了,加拿大的12月是下雪最多的月份。然后你还能观察到一些其他的端倪,比如你会发现,11月开始突然下雪,接着就雪期漫漫,虽然下雪的概率逐步减小,但是可能要到4月或者5月才会停止。",
"_____no_output_____"
],
[
"# 5.3 画一下温度和雪期",
"_____no_output_____"
],
[
"我们把温度和下雪概率放到一起,组成dataframe的2列,然后画个图",
"_____no_output_____"
]
],
[
[
"temperature = weather_2012['Temp (C)'].resample('M', how=np.median)\nis_snowing = weather_2012['Weather'].str.contains('Snow')\nsnowiness = is_snowing.astype(float).resample('M', how=np.mean)\n\n# 给列取个名字\ntemperature.name = \"Temperature\"\nsnowiness.name = \"Snowiness\"",
"/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:1: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n \"\"\"Entry point for launching an IPython kernel.\n/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:3: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n This is separate from the ipykernel package so we can avoid doing imports until\n"
]
],
[
[
"### 我们用concat完成字符串的拼接",
"_____no_output_____"
],
[
"用 `concat` 把这两列拼接到一列中,组成一个新的dataframe",
"_____no_output_____"
]
],
[
[
"stats = pd.concat([temperature, snowiness], axis=1)\nstats",
"_____no_output_____"
],
[
"stats.plot(kind='bar')",
"_____no_output_____"
]
],
[
[
"你发现,什么鬼!!!紫色的下雪概率呢!!!<br>\n是的亲,你这2个维度的幅度是不一样的,所以要分开画哦。",
"_____no_output_____"
]
],
[
[
"stats.plot(kind='bar', subplots=True, figsize=(15, 10))",
"_____no_output_____"
]
],
[
[
"# 总结",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Make the graphs a bit prettier, and bigger\npd.set_option('display.mpl_style', 'default')\n# matplotlib.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (15, 5)\nplt.rcParams['font.family'] = 'sans-serif'\n\n# This is necessary to show lots of columns in pandas 0.12. \n# Not necessary in pandas 0.13.\npd.set_option('display.width', 5000) \npd.set_option('display.max_columns', 60)\n\n# Load data\nweather_2012 = pd.read_csv('./data/weather_2012.csv',\n parse_dates=True,\n index_col='Date/Time')\n# Data Preprocessing\ntemperature = weather_2012['Temp (C)'].resample('M', how=np.median) #采样频率 1个月,统计 中位数\nis_snowing = weather_2012['Weather'].str.contains('Snow') #统计 字符串 的值是否包含某个字段,\nsnowiness = is_snowing.astype(float).resample('M', how=np.mean) #采样频率 1个月,统计 平均值\n# Rename the Dataframe\ntemperature.name = \"Temperature\"\nsnowiness.name = \"Snowiness\"\n# Concat all of the Dataframe\nstats = pd.concat([temperature, snowiness], axis=1)\n# Plot\nstats.plot(kind='bar', subplots=True, figsize=(15, 10))",
"/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:22: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n/opt/anaconda3/envs/python27/lib/python2.7/site-packages/ipykernel_launcher.py:24: FutureWarning: how in .resample() is deprecated\nthe new syntax is .resample(...)..apply(<func>)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d8e364d5c28f59feb867306bfb53c43e15be1a | 170,115 | ipynb | Jupyter Notebook | MAIN-AA2AR.ipynb | SABS-R3-projects/JA-ML | 6d43c1a15befde877c89203218d502e9d8a4b01f | [
"BSD-3-Clause"
] | null | null | null | MAIN-AA2AR.ipynb | SABS-R3-projects/JA-ML | 6d43c1a15befde877c89203218d502e9d8a4b01f | [
"BSD-3-Clause"
] | 1 | 2019-12-10T10:24:19.000Z | 2019-12-10T10:24:19.000Z | MAIN-AA2AR.ipynb | SABS-R3-projects/JA-ML | 6d43c1a15befde877c89203218d502e9d8a4b01f | [
"BSD-3-Clause"
] | null | null | null | 170,115 | 170,115 | 0.883156 | [
[
[
"# Machine Learning - AA2AR\n",
"_____no_output_____"
],
[
"**By Jakke Neiro & Andrei Roibu** ",
"_____no_output_____"
],
[
"## 1. Importing All Required Dependencies",
"_____no_output_____"
],
[
"This script imports all the required dependencies for running the different functions and the codes. Also, by using the _run_ command, the various notebooks are imprted into the main notebook.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy import interp\n\nimport glob, os\n\nfrom sklearn.metrics import accuracy_score, roc_curve, roc_auc_score, auc, roc_auc_score, confusion_matrix, classification_report, log_loss\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis\n\nfrom sklearn import svm, datasets, tree\nfrom sklearn.ensemble import RandomForestClassifier\n\n\nfrom sklearn.neural_network import MLPClassifier, MLPRegressor\nfrom sklearn.preprocessing import StandardScaler\n",
"_____no_output_____"
]
],
[
[
"In order to take advantage of the speed increases provided by GPUs, this code has been modified in order to run on Google Colab notebooks. In order to do this, the user needs to set the *google_colab_used* parameter to **True**. For usage on a local machine, this needs to be set to **False**\n\nIf used on a google colab notebook, the user will need to follow the instructions to generate and then copy an autorisation code from a generated link.\n",
"_____no_output_____"
]
],
[
[
"google_colab_used = True\n\nif google_colab_used == True:\n # Load the Drive helper and mount\n from google.colab import drive\n\n # This will prompt for authorization.\n drive.mount('/content/drive')\n\n data_drive = '/content/drive/My Drive/JA-ML/data'\n\n os.chdir(data_drive)\n\nelse:\n os.chdir(\"./data\")",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n"
]
],
[
[
"## 2. Data Pre-Processing",
"_____no_output_____"
],
[
"This section imports all the required datasets as pandas dataframes, concatenates them, after which it pre-processes them by eliminating all non-numerical data and columns which contain the same data-values. This script also creates the input dataset and the labeled output dataset.",
"_____no_output_____"
]
],
[
[
"def data_preprocessing():\n \n '''\n This reads all the input datasets, pre-processes them and then generates the input dataset and the labelled dataset.\n \n Args:\n None\n \n Returns:\n X (ndarray): A 2D array containing the input processed data\n y (ndarray): A 1D array containing a list of labels, with 1 corresponding to \"active\" and 0 corresponding to \"dummy\"\n \n '''\n \n df_list = []\n y = np.array([])\n for file in glob.glob(\"aa2ar*.csv\"):\n df = pd.read_csv(file, header = 0)\n\n cols = df.shape[0]\n if \"actives\" in file:\n y_df = np.ones((cols))\n else:\n y_df = np.zeros((cols))\n y = np.concatenate((y,y_df), axis=0)\n\n df_list.append(df)\n\n global_df = pd.concat(df_list, axis=0, ignore_index=True)\n global_df = global_df._get_numeric_data() # removes any non-numeric data\n global_df = global_df.loc[:, (global_df != global_df.iloc[0]).any()] # modifies the dataframe to remove columns with only 0s\n\n X_headers = list(global_df.columns.values)\n X = global_df.values\n \n return X,y",
"_____no_output_____"
],
[
"X,y = data_preprocessing()",
"_____no_output_____"
],
[
"def data_split(X,y,random_state=42):\n \n '''\n This function takes the original datasets and splits them into training and testing datasets. For consistency, the function employs a 80-20 split for the train and test sets.\n \n Args:\n X (ndarray): A 2D array containing the input processed data\n y (ndarray): A 1D array containing a list of labels, with 1 corresponding to \"active\" and 0 corresponding to \"dummy\"\n random_state (int): An integer, representing the seed to be used by the random number generator; if not provided, the default value goes to 42\n \n Returns:\n X_train (ndarray): 2D array of input dataset used for training\n X_test (ndarray): 2D array of input dataset used for testing\n y_train (ndarray): 1D array of train labels \n y_test (ndarray): 1D array of test labels \n \n '''\n\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=random_state)\n \n return X_train, X_test, y_train, y_test",
"_____no_output_____"
],
[
"X_train, X_test, y_train, y_test = data_split(X,y)",
"_____no_output_____"
]
],
[
[
"## 3. Model Evaluation\n",
"_____no_output_____"
],
[
"This section produces the ROC plot, as well as several other performance metrics, including the classifier scores, the log-loss for each classifier, the confusion matrix and the classification report including the f1 score. The f1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst score at 0.",
"_____no_output_____"
]
],
[
[
"def ROC_plotting(title, y_test, y_score):\n \n '''\n This function generates the ROC plot for a given model.\n \n Args:\n title (string): String represending the name of the model.\n y_test (ndarray): 1D array of test dataset \n y_score (ndarray): 1D array of model-predicted labels\n \n Returns:\n ROC Plot\n \n '''\n \n\n fpr, tpr, _ = roc_curve(y_test, y_score)\n roc_auc = auc(fpr, tpr)\n \n plt.figure()\n lw = 2 # linewidth\n plt.plot(fpr, tpr, color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)\n plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')\n plt.xlim([0.0, 1.0])\n plt.ylim([0.0, 1.05])\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title(title)\n plt.legend(loc=\"lower right\")\n plt.show()",
"_____no_output_____"
],
[
"def performance_evaluation(X_train, X_test, y_train, y_test, predicted_train, predicted_test, y_score, title=\"model\"):\n \n '''\n This function prints the results of the different classifiers,a s well as several performance metrics\n \n Args:\n X_train (ndarray): 2D array of input dataset used for training\n X_test (ndarray): 2D array of input dataset used for testing\n y_train (ndarray): 1D array of train labels \n y_test (ndarray): 1D array of test labels \n title (string): the classifier name\n predicted_train (ndarray): 1D array of model-predicted labels for the train dataset \n predicted_test (ndarray): 1D array of model-predicted labels for the test dataset\n \n Returns:\n ROC Plot\n \n '''\n \n print(\"For the \", title, \" classifier:\")\n print(\"Training set score: %f\" % accuracy_score(y_train,predicted_train ))\n print(\"Training log-loss: %f\" % log_loss(y_train, predicted_train))\n print(\"Training set confusion matrix:\")\n print(confusion_matrix(y_train,predicted_train))\n print(\"Training set classification report:\")\n print(classification_report(y_train,predicted_train))\n \n print(\"Test set score: %f\" % accuracy_score(y_test, predicted_test))\n print(\"Test log-loss: %f\" % log_loss(y_test, predicted_test))\n print(\"Test set confusion matrix:\")\n print(confusion_matrix(y_test,predicted_test))\n print(\"Test set classification report:\")\n print(classification_report(y_test,predicted_test))\n\n ROC_plotting(\"ROC for \"+ title,y_test, y_score)\n fpr, tpr, _ = roc_curve(y_test, y_score)\n roc_auc = auc(fpr, tpr)\n print(\"AUC:\" + str(roc_auc))",
"_____no_output_____"
],
[
"def model_evaluation(function_name, X_train, X_test, y_train, y_test, title):\n\n '''\n This function evaluates the propoesed model the results of the different classifiers,a s well as several performance metrics\n\n Args:\n function_name (function): the function describing the employed model\n X_train (ndarray): 2D array of input dataset used for training\n X_test (ndarray): 2D array of input dataset used for testing\n y_train (ndarray): 1D array of train labels \n y_test (ndarray): 1D array of test labels \n title (string): the classifier name\n \n Returns:\n ROC Plot\n \n '''\n\n if title == 'Neural Network':\n y_predicted_train, y_predicted_test, y_score = neural_network(X_train, X_test, y_train, y_test)\n else:\n y_predicted_train, y_predicted_test, y_score = function_name(X_train, y_train, X_test)\n\n performance_evaluation(X_train, X_test, y_train, y_test, y_predicted_train, y_predicted_test, y_score, title)\n",
"_____no_output_____"
],
[
"def multiple_model_evaluation(function_name, X, y, title):\n\n '''\n This function takes the proposed model and original datasets and evaluates the proposed model by splitting the datasets randomly for 5 times.\n \n Args:\n function_name (function): the function describing the employed model\n X (ndarray): A 2D array containing the input processed data\n y (ndarray): A 1D array containing a list of labels, with 1 corresponding to \"active\" and 0 corresponding to \"dummy\"\n title (string): the classifier name\n\n '''\n\n random_states = [1, 10, 25, 42, 56]\n\n test_set_scores = []\n test_log_losses = []\n roc_aucs = []\n\n for random_state in random_states:\n X_train, X_test, y_train, y_test = data_split(X,y,random_state=random_state)\n\n if title == 'Neural Network':\n y_predicted_train, y_predicted_test, y_score = neural_network(X_train, X_test, y_train, y_test)\n else:\n y_predicted_train, y_predicted_test, y_score = function_name(X_train, y_train, X_test)\n\n test_set_score = accuracy_score(y_test, y_predicted_test)\n test_log_loss = log_loss(y_test, y_predicted_test)\n\n fpr, tpr, _ = roc_curve(y_test, y_score)\n roc_auc = auc(fpr, tpr)\n\n test_set_scores.append(test_set_score)\n test_log_losses.append(test_log_loss)\n roc_aucs.append(roc_auc)\n\n print(\"The average test set score for \", title, \"is: \", str(np.mean(test_set_scores)))\n print(\"The average test log-loss for \", title, \"is: \", str(np.mean(test_log_loss)))\n print(\"The average AUC for \", title, \"is: \", str(np.mean(roc_auc)))\n \n",
"_____no_output_____"
]
],
[
[
"## 4. Logistic regression, linear and quadratic discriminant analysis",
"_____no_output_____"
],
[
"### 4.1. Logistic regression",
"_____no_output_____"
],
[
"Logistic regression (logit regression, log-liner classifier) is a generalized linear model used for classification that uses a log-linear link function to model the outcome of a binary reponse variable $\\mathbf{y}$ using a single or multiple predictors $\\mathbf{X}$. Mathematically, the logistic regression primarily computes the probability of the value of a response variable given a value of the predictor, and this probability is then used for predicting the most probable outcome. The logistic regression has several advantages: it is easy to implement, it is efficient to train and it does not require input features to be scaled. However, the logistic regression can only produce a non-linear decision boundary. Therefore, with a complex dataset as ours, we do not expect it to perform particularly well.",
"_____no_output_____"
]
],
[
[
"def LogReg(X_train, y_train, X_test):\n \"\"\"Classification using logistic regression \n\n Args:\n X_train: Predictor or feature values used for training\n y_train: Response values used for training\n X_test: Predictor or feature values used for predicting the response values using the classifier\n\n Returns:\n y_predicted: The predicted response values\n\n \"\"\"\n scaler = StandardScaler()\n scaler.fit(X_train)\n X_train = scaler.transform(X_train)\n X_test = scaler.transform(X_test)\n \n #Define and train the model\n classifier = LogisticRegression(max_iter = 500).fit(X_train, y_train)\n \n #Predict the response values using the test predictor data\n y_predicted_test = classifier.predict(X_test)\n y_predicted_train = classifier.predict(X_train)\n y_score = classifier.predict_proba(X_test)[:,1]\n return y_predicted_train, y_predicted_test, y_score",
"_____no_output_____"
],
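[
"# A minimal sketch added for illustration (not part of the original pipeline):\n# logistic regression models P(y=1|x) = 1 / (1 + exp(-(x.w + b))), so we can\n# reproduce predict_proba by applying that sigmoid to the linear scores ourselves.\n# Assumes the X_train / y_train split defined earlier in this notebook.\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.preprocessing import StandardScaler\n\nX_demo = StandardScaler().fit_transform(X_train)\nclf_demo = LogisticRegression(max_iter = 500).fit(X_demo, y_train)\n\nscores = X_demo @ clf_demo.coef_.ravel() + clf_demo.intercept_  # the linear part x.w + b\nmanual_probs = 1.0 / (1.0 + np.exp(-scores))  # the logistic link\nprint(np.allclose(manual_probs, clf_demo.predict_proba(X_demo)[:,1]))  # expect True",
"_____no_output_____"
],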
[
"model_evaluation(LogReg, X_train, X_test, y_train, y_test, title='Logistic Regression')",
"For the Logistic Regression classifier:\nTraining set score: 0.996990\nTraining log-loss: 0.103952\nTraining set confusion matrix:\n[[25170 23]\n [ 54 337]]\nTraining set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 25193\n 1.0 0.94 0.86 0.90 391\n\n accuracy 1.00 25584\n macro avg 0.97 0.93 0.95 25584\nweighted avg 1.00 1.00 1.00 25584\n\nTest set score: 0.994684\nTest log-loss: 0.183603\nTest set confusion matrix:\n[[6294 11]\n [ 23 68]]\nTest set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 6305\n 1.0 0.86 0.75 0.80 91\n\n accuracy 0.99 6396\n macro avg 0.93 0.87 0.90 6396\nweighted avg 0.99 0.99 0.99 6396\n\n"
],
[
"multiple_model_evaluation(LogReg, X, y, title='Logistic Regression')",
"The average test set score for Logistic Regression is: 0.9956535334584116\nThe average test log-loss for Logistic Regression is: 0.12960241248550722\nThe average AUC for Logistic Regression is: 0.998101599065739\n"
]
],
[
[
"### 4.2. Linear discriminant analysis",
"_____no_output_____"
],
[
"LDA employs Bayes' theorem to fit a Gaussian density to each class of data. The classes are assumed to have the same covariance matrix. This generates a linear decision boundry. ",
"_____no_output_____"
]
],
[
[
"def LDA(X_train, y_train, X_test):\n \n \"\"\"Classification using LDA \n\n Args:\n X_train: Predictor or feature values used for training\n y_train: Response values used for training\n X_test: Predictor or feature values used for predicting the response values using the classifier\n\n Returns:\n y_predicted_train: The predicted response values for the training dataset\n y_predicted_test: The predicted response values for the test dataset\n\n \"\"\"\n \n classifier = LinearDiscriminantAnalysis()\n classifier = classifier.fit(X_train, y_train)\n y_predicted_test = classifier.predict(X_test)\n y_predicted_train = classifier.predict(X_train)\n y_score = classifier.predict_proba(X_test)[:,1]\n return y_predicted_train, y_predicted_test, y_score",
"_____no_output_____"
],
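[
"# A minimal sketch added for illustration (not part of the original pipeline):\n# because LDA assumes one shared covariance matrix, its decision rule reduces to\n# a linear score x.w + b, which sklearn exposes through coef_ and intercept_.\n# Assumes the X_train / y_train split defined earlier in this notebook.\nimport numpy as np\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\n\nlda_demo = LinearDiscriminantAnalysis().fit(X_train, y_train)\nlinear_scores = X_train @ lda_demo.coef_.ravel() + lda_demo.intercept_\nprint(np.allclose(linear_scores, lda_demo.decision_function(X_train)))  # expect True",
"_____no_output_____"
],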
[
"model_evaluation(LDA, X_train, X_test, y_train, y_test, title='Linear Discriminant')",
"For the Linear Discriminant classifier:\nTraining set score: 0.992886\nTraining log-loss: 0.245707\nTraining set confusion matrix:\n[[25067 126]\n [ 56 335]]\nTraining set classification report:\n precision recall f1-score support\n\n 0.0 1.00 0.99 1.00 25193\n 1.0 0.73 0.86 0.79 391\n\n accuracy 0.99 25584\n macro avg 0.86 0.93 0.89 25584\nweighted avg 0.99 0.99 0.99 25584\n\nTest set score: 0.992495\nTest log-loss: 0.259206\nTest set confusion matrix:\n[[6278 27]\n [ 21 70]]\nTest set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 6305\n 1.0 0.72 0.77 0.74 91\n\n accuracy 0.99 6396\n macro avg 0.86 0.88 0.87 6396\nweighted avg 0.99 0.99 0.99 6396\n\n"
],
[
"multiple_model_evaluation(LDA, X, y, title='Linear Discriminant')",
"The average test set score for Linear Discriminant is: 0.9924015009380863\nThe average test log-loss for Linear Discriminant is: 0.17820456731990192\nThe average AUC for Linear Discriminant is: 0.9866419619481748\n"
]
],
[
[
"### 4.3. Quadratic discriminant analysis",
"_____no_output_____"
],
[
"QDA is similar to LDA, however it employs a quadratic decision boundary, rather than a linear one.",
"_____no_output_____"
]
],
[
[
"def QDA(X_train, y_train, X_test):\n \"\"\"Classification using QDA \n\n Args:\n X_train: Predictor or feature values used for training\n y_train: Response values used for training\n X_test: Predictor or feature values used for predicting the response values using the classifier\n\n Returns:\n y_predicted_train: The predicted response values for the training dataset\n y_predicted_test: The predicted response values for the test dataset\n\n \"\"\"\n classifier = QuadraticDiscriminantAnalysis()\n classifier = classifier.fit(X_train, y_train)\n y_predicted_test = classifier.predict(X_test)\n y_predicted_train = classifier.predict(X_train)\n y_score = classifier.predict_proba(X_test)[:,1]\n return y_predicted_train, y_predicted_test, y_score",
"_____no_output_____"
],
[
"model_evaluation(QDA, X_train, X_test, y_train, y_test, title='Quadratic Discriminant')",
"For the Quadratic Discriminant classifier:\nTraining set score: 0.015518\nTraining log-loss: 34.003608\nTraining set confusion matrix:\n[[ 7 25186]\n [ 1 390]]\nTraining set classification report:\n precision recall f1-score support\n\n 0.0 0.88 0.00 0.00 25193\n 1.0 0.02 1.00 0.03 391\n\n accuracy 0.02 25584\n macro avg 0.45 0.50 0.02 25584\nweighted avg 0.86 0.02 0.00 25584\n\nTest set score: 0.014853\nTest log-loss: 34.026559\nTest set confusion matrix:\n[[ 4 6301]\n [ 0 91]]\nTest set classification report:\n precision recall f1-score support\n\n 0.0 1.00 0.00 0.00 6305\n 1.0 0.01 1.00 0.03 91\n\n accuracy 0.01 6396\n macro avg 0.51 0.50 0.01 6396\nweighted avg 0.99 0.01 0.00 6396\n\n"
],
[
"multiple_model_evaluation(QDA, X, y, title='Quadratic Discriminant')",
"The average test set score for Quadratic Discriminant is: 0.015916197623514698\nThe average test log-loss for Quadratic Discriminant is: 34.05355944460625\nThe average AUC for Quadratic Discriminant is: 0.5002377555872564\n"
]
],
[
[
"## 5. Decision trees and random forest",
"_____no_output_____"
],
[
"### 5.1. Single decision tree",
"_____no_output_____"
],
[
"Decision trees are a non-parametric learning method used for both classification and regression. The advantages of decision trees are that they are easy to understand and they can be used for a broad range of data. However, the main disadvantages are that a single decision tree is easily overfitted and hence even small perturbations in the data might result in a markedly different classifier. This problem is tackled by generating several decision trees for deriving the final classifier. Here, we first train a single decision tree before we looking into more sophisticated ensemble methods.",
"_____no_output_____"
],
[
"We fit a single decision tree with default parameters and predict the values of $\\mathbf{y}$ based on the test data.",
"_____no_output_____"
]
],
[
[
"def DecisionTree(X_train, y_train, X_test):\n \n \"\"\"Classification using Decision Tree \n\n Args:\n X_train: Predictor or feature values used for training\n y_train: Response values used for training\n X_test: Predictor or feature values used for predicting the response values using the classifier\n\n Returns:\n y_predicted_train: The predicted response values for the training dataset\n y_predicted_test: The predicted response values for the test dataset\n\n \"\"\"\n \n classifier = tree.DecisionTreeClassifier()\n classifier = classifier.fit(X_train, y_train)\n y_predicted_test = classifier.predict(X_test)\n y_predicted_train = classifier.predict(X_train)\n y_score = classifier.predict_proba(X_test)[:,1]\n return y_predicted_train, y_predicted_test, y_score",
"_____no_output_____"
],
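[
"# A small sketch added to illustrate the instability mentioned above (not part\n# of the original analysis): refitting the same tree on a bootstrap resample of\n# the training data can change a noticeable fraction of its predictions.\n# Assumes X_train / y_train (numpy arrays) defined earlier in this notebook.\nimport numpy as np\nfrom sklearn import tree\n\nrng = np.random.RandomState(0)\nidx = rng.choice(len(X_train), size=len(X_train), replace=True)  # bootstrap sample\n\ntree_a = tree.DecisionTreeClassifier(random_state=0).fit(X_train, y_train)\ntree_b = tree.DecisionTreeClassifier(random_state=0).fit(X_train[idx], y_train[idx])\n\ndisagreement = np.mean(tree_a.predict(X_train) != tree_b.predict(X_train))\nprint('Fraction of training predictions that changed:', disagreement)",
"_____no_output_____"
],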
[
"model_evaluation(DecisionTree, X_train, X_test, y_train, y_test, title='Decision Tree')",
"For the Decision Tree classifier:\nTraining set score: 1.000000\nTraining log-loss: 0.000000\nTraining set confusion matrix:\n[[25193 0]\n [ 0 391]]\nTraining set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 25193\n 1.0 1.00 1.00 1.00 391\n\n accuracy 1.00 25584\n macro avg 1.00 1.00 1.00 25584\nweighted avg 1.00 1.00 1.00 25584\n\nTest set score: 0.994059\nTest log-loss: 0.205205\nTest set confusion matrix:\n[[6286 19]\n [ 19 72]]\nTest set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 6305\n 1.0 0.79 0.79 0.79 91\n\n accuracy 0.99 6396\n macro avg 0.89 0.89 0.89 6396\nweighted avg 0.99 0.99 0.99 6396\n\n"
],
[
"multiple_model_evaluation(DecisionTree, X, y, title='Decision Tree')",
"The average test set score for Decision Tree is: 0.9939337085678549\nThe average test log-loss for Decision Tree is: 0.1998053027747743\nThe average AUC for Decision Tree is: 0.9290531861981515\n"
]
],
[
[
"### 5.2. Random forest",
"_____no_output_____"
],
[
"Radnom forest explanation...",
"_____no_output_____"
]
],
[
[
"def RandomForest(X_train, y_train, X_test):\n \n \"\"\"Classification using Random Forest \n\n Args:\n X_train: Predictor or feature values used for training\n y_train: Response values used for training\n X_test: Predictor or feature values used for predicting the response values using the classifier\n\n Returns:\n y_predicted_train: The predicted response values for the training dataset\n y_predicted_test: The predicted response values for the test dataset\n\n \"\"\"\n \n rf_classifier = RandomForestClassifier(n_estimators=200)\n rf_classifier = rf_classifier.fit(X_train, y_train)\n y_predicted_test = rf_classifier.predict(X_test)\n y_predicted_train = rf_classifier.predict(X_train)\n y_score = rf_classifier.predict_proba(X_test)[:,1]\n return y_predicted_train, y_predicted_test, y_score",
"_____no_output_____"
],
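[
"# A quick sketch added for illustration: a fitted random forest exposes\n# feature_importances_, the mean impurity decrease attributable to each feature\n# across the ensemble, which is a simple way to interpret the model.\n# Assumes X_train / y_train defined earlier in this notebook.\nimport numpy as np\nfrom sklearn.ensemble import RandomForestClassifier\n\nrf_demo = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)\ntop = np.argsort(rf_demo.feature_importances_)[::-1][:5]\nfor i in top:\n    print('feature index %d, importance %.3f' % (i, rf_demo.feature_importances_[i]))",
"_____no_output_____"
],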
[
"model_evaluation(RandomForest, X_train, X_test, y_train, y_test, title='Random Forest')",
"For the Random Forest classifier:\nTraining set score: 1.000000\nTraining log-loss: 0.000000\nTraining set confusion matrix:\n[[25193 0]\n [ 0 391]]\nTraining set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 25193\n 1.0 1.00 1.00 1.00 391\n\n accuracy 1.00 25584\n macro avg 1.00 1.00 1.00 25584\nweighted avg 1.00 1.00 1.00 25584\n\nTest set score: 0.995935\nTest log-loss: 0.140402\nTest set confusion matrix:\n[[6305 0]\n [ 26 65]]\nTest set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 6305\n 1.0 1.00 0.71 0.83 91\n\n accuracy 1.00 6396\n macro avg 1.00 0.86 0.92 6396\nweighted avg 1.00 1.00 1.00 6396\n\n"
],
[
"multiple_model_evaluation(RandomForest, X, y, title='Random Forest')",
"The average test set score for Random Forest is: 0.9964040025015635\nThe average test log-loss for Random Forest is: 0.10800117696970293\nThe average AUC for Random Forest is: 0.9960456053475877\n"
]
],
[
[
"## 6. Neural Network",
"_____no_output_____"
],
[
"A Neural Network, also known as a multi-layered perceptron, is a supervised learning algorithm that learns a function, which is trained using a set of features and targets. A neural network can learns a non-linear function approximator, allowing classification of data. Between the input and output layers, there are a set of non-linear hidden layers. The advantages of a neural network are it's ability to learn non-linear models and perform learning in real-time. However, a NN can suffer from different validation accuracy induced by random weight initialization, has a large number of hyper parameters which require tunning and is sensitive to feature scaling. \n\nThe neural_network function below makes use of inbuilt MLPClassifier, which implements a multi-layer perceptron (MLP) algorithm that trains using Backpropagation.\n\nAs MLPs are sensitive to feature scaling, the data is scaled using the built-in StandardScaler for standardization. The same scaling s applied to the test set for meaningful results.\n\nMost of the MLPClassifier's parameters where left to random. However, several were modifed in order to enhance performance. Firstly, the solver was set to _adam_, which reffers to a stochastic gradient-based optimizer, the alpha regularization parameter was set to 1e-5, the number of hidden layers was set to 2, each with 70 neurons (numbers determined through experimentation throughout the day), and the max_iterations was set to 1500.",
"_____no_output_____"
]
],
[
[
"def neural_network(X_train, X_test, y_train, y_test):\n \n '''\n This function takes in the input datasets, creates a neural network, trains and then tests it.\n \n Written by AndreiRoibu\n \n Args:\n X_train (ndarray): 2D array of input dataset used for training\n X_test (ndarray): 2D array of input dataset used for testing\n y_train (ndarray): 1D array of train labels \n y_test (ndarray): 1D array of test labels \n \n Returns:\n predicted_train (ndarray): 1D array of model-predicted labels for the train dataset \n predicted_test (ndarray): 1D array of model-predicted labels for the test dataset\n \n '''\n \n scaler = StandardScaler()\n scaler.fit(X_train)\n X_train = scaler.transform(X_train)\n X_test = scaler.transform(X_test)\n \n classifier = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(70,70), random_state=1, max_iter=1500)\n \n classifier.fit(X_train, y_train)\n \n predicted_train = classifier.predict(X_train)\n predicted_test = classifier.predict(X_test)\n y_score = classifier.predict_proba(X_test)[:,1]\n return predicted_train, predicted_test, y_score",
"_____no_output_____"
],
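[
"# A small diagnostic sketch added for illustration (not part of the original\n# analysis): MLPClassifier records the training loss per iteration in\n# loss_curve_, a cheap way to check that the optimizer actually converged.\n# Assumes the X_train / y_train split defined earlier in this notebook.\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.preprocessing import StandardScaler\n\nscaler_demo = StandardScaler().fit(X_train)\nmlp_demo = MLPClassifier(solver='adam', alpha=1e-5, hidden_layer_sizes=(70,70), random_state=1, max_iter=1500)\nmlp_demo.fit(scaler_demo.transform(X_train), y_train)\n\nprint('iterations run:', len(mlp_demo.loss_curve_))\nprint('first / last loss: %.4f -> %.4f' % (mlp_demo.loss_curve_[0], mlp_demo.loss_curve_[-1]))",
"_____no_output_____"
],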
[
"model_evaluation(neural_network, X_train, X_test, y_train, y_test, title='Neural Network')",
"For the Neural Network classifier:\nTraining set score: 1.000000\nTraining log-loss: 0.000000\nTraining set confusion matrix:\n[[25193 0]\n [ 0 391]]\nTraining set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 25193\n 1.0 1.00 1.00 1.00 391\n\n accuracy 1.00 25584\n macro avg 1.00 1.00 1.00 25584\nweighted avg 1.00 1.00 1.00 25584\n\nTest set score: 0.997342\nTest log-loss: 0.091802\nTest set confusion matrix:\n[[6299 6]\n [ 11 80]]\nTest set classification report:\n precision recall f1-score support\n\n 0.0 1.00 1.00 1.00 6305\n 1.0 0.93 0.88 0.90 91\n\n accuracy 1.00 6396\n macro avg 0.96 0.94 0.95 6396\nweighted avg 1.00 1.00 1.00 6396\n\n"
],
[
"multiple_model_evaluation(neural_network, X, y, title='Neural Network')",
"The average test set score for Neural Network is: 0.9978111319574735\nThe average test log-loss for Neural Network is: 0.08100175783390838\nThe average AUC for Neural Network is: 0.9996957457235878\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0d8f6d4ec671e2ab85a7ae333c5c980e3a26335 | 730,865 | ipynb | Jupyter Notebook | 2019/lecture-code/lecture 8 - Multiple Comparisons.ipynb | notconfusing/design-governance-experiments | 238c0c41c37ec8e5edd8e41727ddf1dafb2ff539 | [
"MIT"
] | 8 | 2018-02-08T23:24:28.000Z | 2020-09-29T04:41:57.000Z | 2019/lecture-code/lecture 8 - Multiple Comparisons.ipynb | notconfusing/design-governance-experiments | 238c0c41c37ec8e5edd8e41727ddf1dafb2ff539 | [
"MIT"
] | null | null | null | 2019/lecture-code/lecture 8 - Multiple Comparisons.ipynb | notconfusing/design-governance-experiments | 238c0c41c37ec8e5edd8e41727ddf1dafb2ff539 | [
"MIT"
] | 8 | 2018-02-11T02:50:12.000Z | 2019-08-08T12:09:45.000Z | 714.43304 | 154,566 | 0.939283 | [
[
[
"# Lecture 8: p-hacking and Multiple Comparisons\n[J. Nathan Matias](https://github.com/natematias)\n[SOC412](https://natematias.com/courses/soc412/), February 2019\n\nIn Lecture 8, we discussed Stephanie Lee's story about [Brian Wansink](https://www.buzzfeednews.com/article/stephaniemlee/brian-wansink-cornell-p-hacking#.btypwrDwe5), a food researcher who was found guilty of multiple kinds of research misconduct, including \"p-hacking,\" where researchers keep looking for an answer until they find one. In this lecture, we will discuss what p-hacking is and what researchers can do to protect against it in our own work. \n\nThis example uses the [DeclareDesign](http://declaredesign.org/) library, which supports the simulation and evaluation of experiment designs. We will be using DeclareDesign to help with designing experiments in this class.\n\nWhat can you do in your research to protect yourself against the risk of p-hacking or against reductions in the credibility of your research if people accuse you of p-hacking?\n* Conduct a **power analysis** to choose a sample size that is large enough to observe the effect you're looking for (see below)\n* If you have multiple statistical tests in each experiment, [adjust your analysis for multiple comparisons](https://egap.org/methods-guides/10-things-you-need-know-about-multiple-comparisons).\n* [Pre-register](https://cos.io/prereg/) your study, being clear about whether your research is exploratory or confirmatory, and committing in advance to the statistical tests you're using to analyze the results\n* Use cross-validation with training and holdout samples to take an exploratory + confirmatory approach (requires a much larger sample size, typically greater than 2x)",
"_____no_output_____"
],
[
"# Load Libraries",
"_____no_output_____"
]
],
[
[
"options(\"scipen\"=9, \"digits\"=4)\nlibrary(dplyr)\nlibrary(MASS)\nlibrary(ggplot2)\nlibrary(rlang)\nlibrary(corrplot)\nlibrary(Hmisc)\nlibrary(tidyverse)\nlibrary(viridis)\nlibrary(fabricatr)\nlibrary(DeclareDesign)\n## Installed DeclareDesign 0.13 using the following command:\n# install.packages(\"DeclareDesign\", dependencies = TRUE,\n# repos = c(\"http://R.declaredesign.org\", \"https://cloud.r-project.org\"))\noptions(repr.plot.width=7, repr.plot.height=4)\n\nset.seed(03456920)\n\nsessionInfo()",
"_____no_output_____"
]
],
[
[
"# What is a p-value? \nA p-value (which can be calculated differently for different kinds of statistical tests) is an estimate of the probability of rejecting a null hypothesis. When testing differences in means, we are usually testing the null hypothesis of no difference between the two distributions. In those cases, the p-value is the probability of observing a difference between the distributions that is at least as extreme as the one observed.\n\nYou can think of the p-value as the probability represented by the area under the following t distribution of all of the possible outcomes for a given difference between means if the null hypothesis is true:\n\n",
"_____no_output_____"
],
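[
"In symbols, for a two-sided test of a difference in means with test statistic $T$,\n\n$$p = \\Pr\\big( |T| \\ge |t_{obs}| \\mid H_0 \\big),$$\n\nthat is, the probability, computed under the null distribution, of a statistic at least as extreme as the one observed.",
"_____no_output_____"
],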
[
"### Illustrating The Null Hypothesis\nIn the following case, I generate 100 sets of normal distributions with exactly the same mean and standard deviation, and then plot the differences between those means:",
"_____no_output_____"
]
],
[
[
"### GENERATE n.samples simulations at n.sample.size observations\n### using normal distributions at the specified means\n### and record the difference in means and the p value of the observations\n#\n# `@diff.df: the dataframe to pass in\n# `@n.sample.size: the sample sizes to draw from a normal distribution\ngenerate.n.samples <- function(diff.df, n.sample.size = 500){\n for(i in seq(nrow(diff.df))){\n row = diff.df[i,]\n a.dist = rnorm(n.sample.size, mean = row$a.mean, sd = row$a.sd)\n b.dist = rnorm(n.sample.size, mean = row$b.mean, sd = row$a.sd)\n t <- t.test(a.dist, b.dist)\n\n diff.df[i,]$p.value <- t$p.value\n diff.df[i,]$mean.diff <- mean(b.dist) - mean(a.dist)\n }\n diff.df\n}",
"_____no_output_____"
],
[
"#expand.grid\nn.samples = 1000\nnull.hypothesis.df = data.frame(a.mean = 1, a.sd = 1, \n b.mean = 1, b.sd = 1,\n id=seq(n.samples), \n mean.diff = NA,\n p.value = NA)\n\nnull.hypothesis.df <- generate.n.samples(null.hypothesis.df, 200)",
"_____no_output_____"
],
[
"ggplot(null.hypothesis.df, aes(mean.diff)) +\n geom_histogram(binwidth=0.01) +\n xlim(-1.2,1.2) +\n ggtitle(\"Simulated Differences in means under the null hypothesis\")",
"Warning message:\n“Removed 2 rows containing missing values (geom_bar).”"
],
[
"ggplot(null.hypothesis.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +\n geom_point() +\n geom_hline(yintercept = 0.05) + \n ggtitle(\"Simulated p values under the null hypothesis\")",
"_____no_output_____"
],
[
"print(\"How often is the p-value < 0.05?\")\nsummary(null.hypothesis.df$p.value > 0.05)",
"[1] \"How often is the p-value < 0.05?\"\n"
]
],
[
[
"### Illustrating A Difference in Means (first with a small sample size)",
"_____no_output_____"
]
],
[
[
"#expand.grid\nsmall.sample.diff.df = data.frame(a.mean = 1, a.sd = 1, \n b.mean = 1.2, b.sd = 1,\n id=seq(n.samples), \n mean.diff = NA,\n p.value = NA)\n\nsmall.sample.diff.df <- generate.n.samples(small.sample.diff.df, 20)",
"_____no_output_____"
],
[
"ggplot(small.sample.diff.df, aes(mean.diff)) +\n geom_histogram(binwidth=0.01) +\n xlim(-1.2,1.2) +\n ggtitle(\"Simulated Differences in means under the a diff in means of 1 (n=20)\")",
"Warning message:\n“Removed 2 rows containing missing values (geom_bar).”"
],
[
"ggplot(small.sample.diff.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +\n geom_point() +\n geom_hline(yintercept = 0.05) + \n ggtitle(\"Simulated p values under a diff in means of 0.2 (n = 20)\")",
"_____no_output_____"
],
[
"print(\"How often is the p-value < 0.05?\")\nsummary(small.sample.diff.df$p.value > 0.05)",
"[1] \"How often is the p-value < 0.05?\"\n"
],
[
"print(\"How often is the p-value < 0.05? when the estimate is < 0 (false positive)?\")\nnrow(subset(small.sample.diff.df, mean.diff<0 &p.value < 0.05))",
"[1] \"How often is the p-value < 0.05? when the estimate is < 0 (false positive)?\"\n"
],
[
"print(\"How often is the p-value >= 0.05 when the estimate is 0.2 or greater (false negative)?\")\nprint(sprintf(\"%1.2f precent\", \n nrow(subset(small.sample.diff.df, mean.diff>=0.2 &p.value >= 0.05)) / \n nrow(small.sample.diff.df)*100))",
"[1] \"How often is the p-value >= 0.05 when the estimate is 0.2 or greater (false negative)?\"\n[1] \"38.70 precent\"\n"
],
[
"print(\"What is the smallest positive, statistically-significant result?\")\nsprintf(\"%1.2f, which is greater than the true difference of 0.2\", \n min(subset(small.sample.diff.df, mean.diff>0 & p.value < 0.05)$mean.diff))",
"[1] \"What is the smallest positive, statistically-significant result?\"\n"
],
[
"print(\"If we only published statistically-significant results, what we would we think the true effect would be?\")\nsprintf(\"%1.2f, which is greater than the true difference of 0.2\", \n mean(subset(small.sample.diff.df, p.value < 0.05)$mean.diff))",
"[1] \"If we only published statistically-significant results, what we would we think the true effect would be?\"\n"
],
[
"print(\"If we published all experiment results, what we would we think the true effect would be?\")\nsprintf(\"%1.2f, which is very close to the true difference of 0.2\", \n mean(small.sample.diff.df$mean.diff))",
"[1] \"If we published all experiment results, what we would we think the true effect would be?\"\n"
]
],
[
[
"### Illustrating A Difference in Means (with a larger sample size)",
"_____no_output_____"
]
],
[
[
"#expand.grid\nlarger.sample.diff.df = data.frame(a.mean = 1, a.sd = 1, \n b.mean = 1.2, b.sd = 1,\n id=seq(n.samples), \n mean.diff = NA,\n p.value = NA)\n\nlarger.sample.diff.df <- generate.n.samples(larger.sample.diff.df, 200)",
"_____no_output_____"
],
[
"ggplot(larger.sample.diff.df, aes(mean.diff)) +\n geom_histogram(binwidth=0.01) +\n xlim(-1.2,1.2) +\n ggtitle(\"Simulated Differences in means under the a diff in means of 1 (n=200)\")",
"Warning message:\n“Removed 2 rows containing missing values (geom_bar).”"
],
[
"ggplot(larger.sample.diff.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +\n geom_point() +\n geom_hline(yintercept = 0.05) + \n ggtitle(\"Simulated p values under a diff in means of 0.2 (n = 200)\")",
"_____no_output_____"
],
[
"print(\"If we only published statistically-significant results, what we would we think the true effect would be?\")\nsprintf(\"%1.2f, which is greater than the true difference of 0.2\", \n mean(subset(larger.sample.diff.df, p.value < 0.05)$mean.diff))",
"[1] \"If we only published statistically-significant results, what we would we think the true effect would be?\"\n"
],
[
"print(\"How often is the p-value < 0.05?\")\nsprintf(\"%1.2f percent\", \n nrow(subset(larger.sample.diff.df,p.value < 0.05)) / nrow(larger.sample.diff.df)*100)",
"[1] \"How often is the p-value < 0.05?\"\n"
]
],
[
[
"### Illustrating a Difference in Means (with an adequately large sample size)",
"_____no_output_____"
]
],
[
[
"adequate.sample.diff.df = data.frame(a.mean = 1, a.sd = 1, \n b.mean = 1.2, b.sd = 1,\n id=seq(n.samples), \n mean.diff = NA,\n p.value = NA)\n\nadequate.sample.diff.df <- generate.n.samples(larger.sample.diff.df, 400)",
"_____no_output_____"
],
[
"ggplot(adequate.sample.diff.df, aes(mean.diff, p.value, color=factor(p.value < 0.05))) +\n geom_point() +\n geom_hline(yintercept = 0.05) + \n ggtitle(\"Simulated p values under a diff in means of 0.2 (n = 400)\")",
"_____no_output_____"
],
[
"print(\"How often is the p-value < 0.05?\")\nsprintf(\"%1.2f percent\", \n nrow(subset(adequate.sample.diff.df,p.value < 0.05)) / nrow(adequate.sample.diff.df)*100)",
"[1] \"How often is the p-value < 0.05?\"\n"
],
[
"print(\"If we only published statistically-significant results, what we would we think the true effect would be?\")\nsprintf(\"%1.2f, which is greater than the true difference of 0.2\", \n mean(subset(adequate.sample.diff.df, p.value < 0.05)$mean.diff))",
"[1] \"If we only published statistically-significant results, what we would we think the true effect would be?\"\n"
]
],
[
[
"# The Problem of Multiple Comparisons\nIn the above example, I demonstrated that across 100 samples under the null hypothesis and a decision rule of p = 0.05, roughly 5% of the results are statistically significant. This is similarly true for a single experiment with multiple outcome variables.",
"_____no_output_____"
]
],
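[
[
"A standard guard against this is the Bonferroni correction: if the $m$ tests were independent, each run at level $\\alpha$, the chance of at least one false positive under the null would be $1-(1-\\alpha)^m$ (about $40\\%$ for $m=10$, and roughly $90\\%$ for the $m = \\binom{10}{2} = 45$ pairwise correlations examined below), so a conservative remedy is to test each hypothesis at the stricter level $\\alpha/m$.",
"_____no_output_____"
]
],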
[
[
"## Generate n normally distributed outcome variables with no difference on average\n#\n#` @num.samples: sample size for the dataframe\n#` @num.columns: how many outcome variables to observe\n#` @common.mean: the mean of the outcomes\n#` @common.sd: the standard deviation of the outcomes\n\ngenerate.n.outcomes.null <- function( num.samples, num.columns, common.mean, common.sd){\n df <- data.frame(id = seq(num.samples))\n for(i in seq(num.columns)){\n df[paste('row.',i,sep=\"\")] <- rnorm(num.samples, mean=common.mean, sd=common.sd)\n }\n df\n}",
"_____no_output_____"
]
],
[
[
"### With 10 outcome variables, if we look for correlations between every outcomes, we expect to see 5% false positives on average under the null hypothesis.",
"_____no_output_____"
]
],
[
[
"set.seed(487)\n## generate the data\nnull.10.obs <- generate.n.outcomes.null(100, 10, 1, 3)\nnull.10.obs$id <- NULL",
"_____no_output_____"
],
[
"null.correlations <- cor(null.10.obs, method=\"pearson\")\nnull.pvalues <- cor.mtest(null.10.obs, conf.level = 0.95, method=\"pearson\")$p\ncorrplot(cor(null.10.obs, method=\"pearson\"), sig.level = 0.05, p.mat = null.pvalues)",
"_____no_output_____"
]
],
[
[
"### With multiple comparisons, increasing the sample size does not make the problem go away. Here, we use a sample of 10000 instead of 100",
"_____no_output_____"
]
],
[
[
"null.10.obs.large <- generate.n.outcomes.null(10000, 10, 1, 3)\nnull.10.obs.large$id <- NULL\nnull.correlations <- cor(null.10.obs.large, method=\"pearson\")\nnull.pvalues <- cor.mtest(null.10.obs.large, conf.level = 0.95, method=\"pearson\")$p\ncorrplot(cor(null.10.obs.large, method=\"pearson\"), sig.level = 0.05, p.mat = null.pvalues)",
"_____no_output_____"
]
],
[
[
"# Power Analysis\nA power analysis is a process for deciding what sample size to use based on the chance of observing the minimum effect you are looking for in your study. This power analysis uses [DeclareDesign](http://declaredesign.org/). Another option is the [egap Power Analysis page.](https://egap.org/content/power-analysis-simulations-r)\n\n(we will discuss this in further detail in a subsequent class)",
"_____no_output_____"
]
],
[
[
"mean.a <- 0\neffect.b <- 0.1\nsample.size <- 500\n\ndesign <-\n declare_population(\n N = sample.size\n ) +\n declare_potential_outcomes(\n YA_Z_0 = rnorm(n=N, mean = mean.a, sd=1),\n YA_Z_1 = rnorm(n=N, mean = mean.a + effect.b, sd=1)\n ) +\n declare_assignment(num_arms = 2,\n conditions = (c(\"0\", \"1\"))) +\n declare_estimand(ate_YA_1_0 = effect.b) +\n declare_reveal(outcome_variables = c(\"YA\")) +\n declare_estimator(YA ~ Z, estimand=\"ate_YA_1_0\")",
"_____no_output_____"
],
[
"design",
"_____no_output_____"
],
[
"diagnose_design(design, sims=500, bootstrap_sims=500)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
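[
"markdown"
],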
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0d9326e61f183e92bf3d75c29b05dfe4220671c | 3,853 | ipynb | Jupyter Notebook | solutions/2019/kws/04.ipynb | kws/AdventOfCode | 8da337ece8a46d070185e1d81592745dae7f6744 | [
"MIT"
] | 1 | 2020-12-04T20:15:47.000Z | 2020-12-04T20:15:47.000Z | solutions/2019/kws/04.ipynb | kws/AdventOfCode | 8da337ece8a46d070185e1d81592745dae7f6744 | [
"MIT"
] | 1 | 2020-12-02T08:31:35.000Z | 2020-12-02T20:24:34.000Z | solutions/2019/kws/04.ipynb | kws/AdventOfCode | 8da337ece8a46d070185e1d81592745dae7f6744 | [
"MIT"
] | 3 | 2018-11-30T18:14:15.000Z | 2018-12-10T20:18:15.000Z | 24.232704 | 97 | 0.506359 | [
[
[
"def test_sequence(value):\n \"\"\" Number must have two consecutive digits \"\"\"\n value = str(value)\n for i in range(0, len(value)-1):\n if value[i] == value[i+1]:\n return True\n \n return False\n\nassert test_sequence(111111)\nassert not test_sequence(123456)\nassert test_sequence(123455)\nassert test_sequence(113456)\n\n",
"_____no_output_____"
],
[
"def test_increasing(value):\n \"\"\" Each digit must be larger than or equal to the preceeding one \"\"\"\n value = str(value)\n value = [int(c) for c in value]\n for i in range(0, len(value)-1):\n if value[i] > value[i+1]:\n return False\n\n return True\n \nassert test_increasing(111111)\nassert test_increasing(123456)\nassert not test_increasing(123454)\n",
"_____no_output_____"
],
[
"def test_all(value):\n return test_sequence(value) and test_increasing(value)\n\nassert test_all(111111)\nassert not test_all(223450)\nassert not test_all(123789)\nassert not test_all(109166)\n",
"_____no_output_____"
],
[
"with open(\"04-input.txt\", \"rt\") as FILE:\n value = FILE.read()\n \nvalue = value.split(\"-\")\nvalue = [int(v) for v in value]\n\nvalues = []\nfor v in range(value[0], value[1]+1):\n if test_all(v):\n values.append(v)\n \nlen(values)",
"_____no_output_____"
]
],
[
[
"# Part 2",
"_____no_output_____"
]
],
[
[
"def test_must_have_double(value):\n \"\"\" Must have at least one pair of digits \"\"\"\n value = str(value)\n for v in set(value):\n if \"{0}{0}\".format(v[0]) in value and \"{0}{0}{0}\".format(v[0]) not in value:\n return True\n \n return False\n\ndef test_all_part2(value):\n return test_all(value) and test_must_have_double(value)\n\nassert test_must_have_double(112233)\nassert not test_must_have_double(123444)\nassert test_must_have_double(111122)\n\nassert test_all_part2(112233)\nassert not test_all_part2(123444)\nassert test_all_part2(111122)\n",
"_____no_output_____"
],
[
"values = []\nfor v in range(value[0], value[1]+1):\n if test_all_part2(v):\n values.append(v)\n \nlen(values)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0d9340c1e3053db3e184a515100ba3581d33238 | 129,842 | ipynb | Jupyter Notebook | ML1 - Scikit Learn Methods (Complete).ipynb | oskali/SoftwareTools_MBAN21_Python | 308ef8b3bda980d92a14f946ed9e16030ea69072 | [
"MIT"
] | null | null | null | ML1 - Scikit Learn Methods (Complete).ipynb | oskali/SoftwareTools_MBAN21_Python | 308ef8b3bda980d92a14f946ed9e16030ea69072 | [
"MIT"
] | null | null | null | ML1 - Scikit Learn Methods (Complete).ipynb | oskali/SoftwareTools_MBAN21_Python | 308ef8b3bda980d92a14f946ed9e16030ea69072 | [
"MIT"
] | null | null | null | 40.639124 | 400 | 0.437794 | [
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"# Classification\n\nWe'll take a tour of the methods for classification in sklearn. First let's load a toy dataset to use:",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_breast_cancer\nbreast = load_breast_cancer()",
"_____no_output_____"
]
],
[
[
"Let's take a look",
"_____no_output_____"
]
],
[
[
"# Convert it to a dataframe for better visuals\ndf = pd.DataFrame(breast.data)\ndf.columns = breast.feature_names\ndf",
"_____no_output_____"
]
],
[
[
"And now look at the targets",
"_____no_output_____"
]
],
[
[
"print(breast.target_names)\nbreast.target",
"['malignant' 'benign']\n"
]
],
[
[
"## Classification Trees",
"_____no_output_____"
],
[
"Using the scikit learn models is basically the same as in Julia's ScikitLearn.jl",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\ncart = DecisionTreeClassifier(max_depth=2, min_samples_leaf=140)\ncart.fit(breast.data, breast.target)",
"_____no_output_____"
]
],
[
[
"Here's a helper function to plot the trees.",
"_____no_output_____"
],
[
"# Installing Graphviz (tedious)\n\n## Windows\n\n1. Download graphviz from https://graphviz.gitlab.io/_pages/Download/Download_windows.html\n2. Install it by running the .msi file\n3. Set the pat variable:\n (a) Go to Control Panel > System and Security > System > Advanced System Settings > Environment Variables > Path > Edit\n (b) Add 'C:\\Program Files (x86)\\Graphviz2.38\\bin'\n4. Run `conda install graphviz`\n5. Run `conda install python-graphviz`\n\n## macOS and Linux\n\n1. Run `brew install graphviz` (install `brew` from https://docs.brew.sh/Installation if you don't have it)\n2. Run `conda install graphviz`\n3. Run `conda install python-graphviz`\n",
"_____no_output_____"
]
],
[
[
"import graphviz\nimport sklearn.tree\ndef visualize_tree(sktree):\n dot_data = sklearn.tree.export_graphviz(sktree, out_file=None, \n filled=True, rounded=True, \n special_characters=False,\n feature_names=df.columns)\n return graphviz.Source(dot_data)",
"_____no_output_____"
],
[
"visualize_tree(cart)",
"_____no_output_____"
]
],
[
[
"We can get the label predictions with the `.predict` method",
"_____no_output_____"
]
],
[
[
"labels = cart.predict(breast.data)\nlabels",
"_____no_output_____"
]
],
[
[
"And similarly the predicted probabilities with `.predict_proba`",
"_____no_output_____"
]
],
[
[
"probs = cart.predict_proba(breast.data)\nprobs",
"_____no_output_____"
]
],
[
[
"Just like in Julia, the probabilities are returned for each class",
"_____no_output_____"
]
],
[
[
"probs.shape",
"_____no_output_____"
]
],
[
[
"We can extract the second column of the probs by slicing, just like how we did it in Julia",
"_____no_output_____"
]
],
[
[
"probs = cart.predict_proba(breast.data)[:,1]\nprobs",
"_____no_output_____"
]
],
[
[
"To evaluate the model, we can use functions from `sklearn.metrics`",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix",
"_____no_output_____"
],
[
"roc_auc_score(breast.target, probs)",
"_____no_output_____"
],
[
"accuracy_score(breast.target, labels)",
"_____no_output_____"
],
[
"confusion_matrix(breast.target, labels)",
"_____no_output_____"
],
[
"from lazypredict.Supervised import LazyClassifier\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\ndata = load_breast_cancer()\nX = data.data\ny= data.target\nX_train, X_test, y_train, y_test = train_test_split(X, y,test_size=.5,random_state =123)\nclf = LazyClassifier(verbose=0,ignore_warnings=True, custom_metric=None)\nmodels,predictions = clf.fit(X_train, X_test, y_train, y_test)\nmodels",
"C:\\Users\\omars\\AppData\\Roaming\\Python\\Python37\\site-packages\\sklearn\\utils\\deprecation.py:143: FutureWarning: The sklearn.utils.testing module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.utils. Anything that cannot be imported from sklearn.utils is now part of the private API.\n warnings.warn(message, FutureWarning)\n100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:03<00:00, 8.91it/s]\n"
]
],
[
[
"## Random Forests and Boosting\n\nWe use random forests and boosting in the same way as CART",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestClassifier\nforest = RandomForestClassifier(n_estimators=100)\nforest.fit(breast.data, breast.target)",
"_____no_output_____"
],
[
"labels = forest.predict(breast.data)\nprobs = forest.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"1.0\n1.0\n"
],
[
"from sklearn.ensemble import GradientBoostingClassifier\nboost = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)\nboost.fit(breast.data, breast.target)",
"_____no_output_____"
],
[
"labels = boost.predict(breast.data)\nprobs = boost.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"1.0\n1.0\n"
],
[
"#!pip install xgboost\nfrom xgboost import XGBClassifier\nboost2 = XGBClassifier()\nboost2.fit(breast.data, breast.target)",
"_____no_output_____"
],
[
"labels = boost2.predict(breast.data)\nprobs = boost2.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"1.0\n1.0\n"
]
],
[
[
"## Neural Networks",
"_____no_output_____"
]
],
[
[
"from sklearn.neural_network import MLPClassifier\nmlp = MLPClassifier(max_iter=1000)\nmlp.fit(breast.data, breast.target)",
"_____no_output_____"
],
[
"labels = mlp.predict(breast.data)\nprobs = mlp.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"0.993235029861001\n0.9490333919156415\n"
],
[
"from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom keras.utils import np_utils\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.pipeline import Pipeline\n# load dataset\nX = breast.data\nY = breast.target\n# convert integers to dummy variables (i.e. one hot encoded)\ndummy_y = np_utils.to_categorical(Y)\n \n# define baseline model\ndef baseline_model():\n # create model\n model = Sequential()\n model.add(Dense(8, input_dim=30, activation='relu'))\n model.add(Dense(2, activation='softmax'))\n # Compile model\n model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\n return model\n \nestimator = KerasClassifier(build_fn=baseline_model, epochs=10, batch_size=16, verbose=1)\nkfold = KFold(n_splits=5, shuffle=True)\nresults = cross_val_score(estimator, X, dummy_y, cv=kfold)\nprint(\"Baseline: %.2f%% (%.2f%%)\" % (results.mean()*100, results.std()*100))",
"Epoch 1/10\n29/29 [==============================] - 0s 6ms/step - loss: 36.4088 - accuracy: 0.3516\nEpoch 2/10\n29/29 [==============================] - 0s 3ms/step - loss: 17.8082 - accuracy: 0.3758\nEpoch 3/10\n29/29 [==============================] - 0s 4ms/step - loss: 15.4570 - accuracy: 0.4352\nEpoch 4/10\n29/29 [==============================] - 0s 4ms/step - loss: 12.8511 - accuracy: 0.4769\nEpoch 5/10\n29/29 [==============================] - 0s 4ms/step - loss: 10.6894 - accuracy: 0.5319\nEpoch 6/10\n29/29 [==============================] - 0s 5ms/step - loss: 9.3079 - accuracy: 0.5670\nEpoch 7/10\n29/29 [==============================] - 0s 4ms/step - loss: 8.3241 - accuracy: 0.5824\nEpoch 8/10\n29/29 [==============================] - 0s 5ms/step - loss: 7.3370 - accuracy: 0.6264\nEpoch 9/10\n29/29 [==============================] - 0s 4ms/step - loss: 6.6161 - accuracy: 0.6154\nEpoch 10/10\n29/29 [==============================] - 0s 4ms/step - loss: 5.6886 - accuracy: 0.6374\n8/8 [==============================] - 0s 2ms/step - loss: 5.7604 - accuracy: 0.6491\nEpoch 1/10\n29/29 [==============================] - 0s 5ms/step - loss: 25.0130 - accuracy: 0.4593\nEpoch 2/10\n29/29 [==============================] - 0s 5ms/step - loss: 9.2743 - accuracy: 0.2923\nEpoch 3/10\n29/29 [==============================] - 0s 5ms/step - loss: 6.6822 - accuracy: 0.3385\nEpoch 4/10\n29/29 [==============================] - 0s 4ms/step - loss: 4.5664 - accuracy: 0.4198\nEpoch 5/10\n29/29 [==============================] - 0s 5ms/step - loss: 3.2452 - accuracy: 0.5253\nEpoch 6/10\n29/29 [==============================] - 0s 5ms/step - loss: 2.2986 - accuracy: 0.6000\nEpoch 7/10\n29/29 [==============================] - 0s 4ms/step - loss: 1.7611 - accuracy: 0.6923\nEpoch 8/10\n29/29 [==============================] - 0s 5ms/step - loss: 1.3770 - accuracy: 0.7231\nEpoch 9/10\n29/29 [==============================] - 0s 5ms/step - loss: 1.2754 - accuracy: 0.7407\nEpoch 10/10\n29/29 [==============================] - 0s 5ms/step - loss: 1.1074 - accuracy: 0.7912\n8/8 [==============================] - 0s 1ms/step - loss: 0.8304 - accuracy: 0.7544\nEpoch 1/10\n29/29 [==============================] - 0s 2ms/step - loss: 19.6493 - accuracy: 0.5560\nEpoch 2/10\n29/29 [==============================] - 0s 4ms/step - loss: 2.1954 - accuracy: 0.9033\nEpoch 3/10\n29/29 [==============================] - 0s 4ms/step - loss: 1.7177 - accuracy: 0.8593\nEpoch 4/10\n29/29 [==============================] - 0s 4ms/step - loss: 1.5284 - accuracy: 0.8901\nEpoch 5/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.5605 - accuracy: 0.9033\nEpoch 6/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.4026 - accuracy: 0.8901\nEpoch 7/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.3803 - accuracy: 0.8923: 0s - loss: 1.4837 - accuracy: 0.89\nEpoch 8/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.2817 - accuracy: 0.9011\nEpoch 9/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.4488 - accuracy: 0.8835\nEpoch 10/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.2524 - accuracy: 0.8923\n8/8 [==============================] - 0s 2ms/step - loss: 1.6985 - accuracy: 0.8684\nEpoch 1/10\n29/29 [==============================] - 0s 1ms/step - loss: 69.7143 - accuracy: 0.3714\nEpoch 2/10\n29/29 [==============================] - 0s 3ms/step - loss: 25.4033 - accuracy: 0.3912\nEpoch 3/10\n29/29 
[==============================] - 0s 3ms/step - loss: 2.2258 - accuracy: 0.8725\nEpoch 4/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.8741 - accuracy: 0.8637\nEpoch 5/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.7284 - accuracy: 0.8813\nEpoch 6/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.7166 - accuracy: 0.8747\nEpoch 7/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.6775 - accuracy: 0.8923\nEpoch 8/10\n29/29 [==============================] - 0s 3ms/step - loss: 1.6431 - accuracy: 0.8923\nEpoch 9/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.6271 - accuracy: 0.8703\nEpoch 10/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.5868 - accuracy: 0.8879\n8/8 [==============================] - 0s 2ms/step - loss: 1.1058 - accuracy: 0.8684\nEpoch 1/10\n29/29 [==============================] - 0s 2ms/step - loss: 9.9503 - accuracy: 0.3838\nEpoch 2/10\n29/29 [==============================] - 0s 2ms/step - loss: 6.8765 - accuracy: 0.3816\nEpoch 3/10\n29/29 [==============================] - 0s 2ms/step - loss: 5.1510 - accuracy: 0.4561\nEpoch 4/10\n29/29 [==============================] - 0s 3ms/step - loss: 3.8095 - accuracy: 0.5592\nEpoch 5/10\n29/29 [==============================] - 0s 3ms/step - loss: 3.1159 - accuracy: 0.6623\nEpoch 6/10\n29/29 [==============================] - 0s 2ms/step - loss: 2.6108 - accuracy: 0.6820\nEpoch 7/10\n29/29 [==============================] - 0s 3ms/step - loss: 2.1609 - accuracy: 0.7259\nEpoch 8/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.9964 - accuracy: 0.7456\nEpoch 9/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.7422 - accuracy: 0.7763\nEpoch 10/10\n29/29 [==============================] - 0s 2ms/step - loss: 1.6857 - accuracy: 0.7500\n8/8 [==============================] - 0s 5ms/step - loss: 1.4334 - accuracy: 0.8319\nBaseline: 79.44% (8.37%)\n"
]
],
[
[
"## Logistic Regression\n\nWe can also access logistic regression from sklearn",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nlogit = LogisticRegression(solver='liblinear')\nlogit.fit(breast.data, breast.target)",
"_____no_output_____"
],
[
"labels = logit.predict(breast.data)\nprobs = logit.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"0.9945298874266688\n0.9578207381370826\n"
]
],
[
[
"The sklearn implementation has options for regularization in logistic regression. You can choose between L1 and L2 regularization:\n\n\n\n\n\n\nNote that this regularization is adhoc and **not equivalent to robustness**. For a robust logistic regression, follow the approach from 15.680.\n\nYou control the regularization with the `penalty` and `C` hyperparameters. We can see that our model above used L2 regularization with $C=1$.",
"_____no_output_____"
],
[
"### Exercise\n\nTry out unregularized logistic regression as well as L1 regularization. Which of the three options seems best? What if you try changing $C$?",
"_____no_output_____"
]
],
[
[
"# No regularization\nlogit = LogisticRegression(C=1e10, solver='liblinear')\nlogit.fit(breast.data, breast.target)\nlabels = logit.predict(breast.data)\nprobs = logit.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"0.996789281750436\n0.9718804920913884\n"
],
[
"# L1 regularization\nlogit = LogisticRegression(C=100, penalty='l1', solver='liblinear')\nlogit.fit(breast.data, breast.target)\nlabels = logit.predict(breast.data)\nprobs = logit.predict_proba(breast.data)[:,1]\nprint(roc_auc_score(breast.target, probs))\nprint(accuracy_score(breast.target, labels))\nconfusion_matrix(breast.target, labels)",
"0.9985201627820939\n0.9876977152899824\n"
]
],
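[
[
"# A short follow-up sketch (added; not part of the exercise solution above): in\n# practice the key difference between the penalties is that L1 tends to zero out\n# coefficients while L2 only shrinks them, which we can see by counting the\n# nonzero coefficients of each fitted model on the breast data loaded earlier.\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\nfor penalty in ['l1', 'l2']:\n    m = LogisticRegression(C=1.0, penalty=penalty, solver='liblinear')\n    m.fit(breast.data, breast.target)\n    print(penalty, 'nonzero coefficients:', int(np.sum(m.coef_ != 0)))",
"_____no_output_____"
]
],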
[
[
"# Regression\n\nNow let's take a look at regression in sklearn. Again we can start by loading up a dataset.",
"_____no_output_____"
]
],
[
[
"from sklearn.datasets import load_boston\nboston = load_boston()\nprint(boston.DESCR)",
".. _boston_dataset:\n\nBoston house prices dataset\n---------------------------\n\n**Data Set Characteristics:** \n\n :Number of Instances: 506 \n\n :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.\n\n :Attribute Information (in order):\n - CRIM per capita crime rate by town\n - ZN proportion of residential land zoned for lots over 25,000 sq.ft.\n - INDUS proportion of non-retail business acres per town\n - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)\n - NOX nitric oxides concentration (parts per 10 million)\n - RM average number of rooms per dwelling\n - AGE proportion of owner-occupied units built prior to 1940\n - DIS weighted distances to five Boston employment centres\n - RAD index of accessibility to radial highways\n - TAX full-value property-tax rate per $10,000\n - PTRATIO pupil-teacher ratio by town\n - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town\n - LSTAT % lower status of the population\n - MEDV Median value of owner-occupied homes in $1000's\n\n :Missing Attribute Values: None\n\n :Creator: Harrison, D. and Rubinfeld, D.L.\n\nThis is a copy of UCI ML housing dataset.\nhttps://archive.ics.uci.edu/ml/machine-learning-databases/housing/\n\n\nThis dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.\n\nThe Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic\nprices and the demand for clean air', J. Environ. Economics & Management,\nvol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics\n...', Wiley, 1980. N.B. Various transformations are used in the table on\npages 244-261 of the latter.\n\nThe Boston house-price data has been used in many machine learning papers that address regression\nproblems. \n \n.. topic:: References\n\n - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.\n - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.\n\n"
]
],
[
[
"Take a look at the X",
"_____no_output_____"
]
],
[
[
"df = pd.DataFrame(boston.data)\ndf.columns = boston.feature_names\ndf",
"_____no_output_____"
],
[
"boston.target",
"_____no_output_____"
]
],
[
[
"## Regression Trees\n\nWe use regression trees in the same way as classification",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeRegressor\ncart = DecisionTreeRegressor(max_depth=2, min_samples_leaf=5)\ncart.fit(boston.data, boston.target)\nvisualize_tree(cart)",
"_____no_output_____"
]
],
[
[
"Like for classification, we get the predicted labels out with the `.predict` method",
"_____no_output_____"
]
],
[
[
"preds = cart.predict(boston.data)\npreds",
"_____no_output_____"
]
],
[
[
"There are functions provided by `sklearn.metrics` to evaluate the predictions",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score",
"_____no_output_____"
],
[
"print(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"3.5736909785051676\n25.69946745212606\n0.695574477973027\n"
]
],
[
[
"## Random Forests and Boosting\n\nRandom forests and boosting for regression work the same as in classification, except we use the `Regressor` version rather than `Classifier`.\n\n### Exercise\n\nTest and compare the (in-sample) performance of random forests and boosting on the Boston data with some sensible parameters.",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestRegressor\nforest = RandomForestRegressor(n_estimators=100)\nforest.fit(boston.data, boston.target)\npreds = forest.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"0.8015256916996039\n1.403098794466401\n0.9833794578134142\n"
],
[
"from sklearn.ensemble import GradientBoostingRegressor\nboost = GradientBoostingRegressor(n_estimators=100, learning_rate=0.2)\nboost.fit(boston.data, boston.target)\npreds = boost.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"0.772623394132104\n0.9809578962773967\n0.9883799685648341\n"
],
[
"from xgboost import XGBRegressor\nboost2 = XGBRegressor()\nboost2.fit(boston.data, boston.target)\npreds = boost2.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"0.026413507235379965\n0.0014430003436840648\n0.9999829068001611\n"
]
],
[
[
"## Neural Networks",
"_____no_output_____"
]
],
[
[
"from sklearn.neural_network import MLPRegressor\nmlp = MLPRegressor(max_iter=1000)\nmlp.fit(boston.data, boston.target)",
"_____no_output_____"
],
[
"preds = mlp.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"2.778925080695003\n15.118711213942651\n0.8209098471688843\n"
],
[
"from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.wrappers.scikit_learn import KerasRegressor\nfrom keras.utils import np_utils\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.pipeline import Pipeline\n# load dataset\nX = boston.data\nY = boston.target\n \n# define baseline model\ndef baseline_model():\n # create model\n model = Sequential()\n model.add(Dense(13, input_dim=X.shape[1], kernel_initializer='normal', activation='relu'))\n model.add(Dense(1, kernel_initializer='normal'))\n # Compile model\n model.compile(loss='mean_squared_error', optimizer='adam')\n return model\n\nestimator = KerasRegressor(build_fn=baseline_model, epochs=10, batch_size=16, verbose=1)\nkfold = KFold(n_splits=5, shuffle=True)\nresults = cross_val_score(estimator, X, Y, cv=kfold)\nprint(\"Mean Squared Error: %.2f (%.2f)\" % (abs(results.mean()), results.std()))",
"Epoch 1/10\n26/26 [==============================] - 0s 2ms/step - loss: 416.3250\nEpoch 2/10\n26/26 [==============================] - 0s 4ms/step - loss: 189.9820\nEpoch 3/10\n26/26 [==============================] - 0s 5ms/step - loss: 147.3391\nEpoch 4/10\n26/26 [==============================] - 0s 4ms/step - loss: 123.4213\nEpoch 5/10\n26/26 [==============================] - 0s 4ms/step - loss: 101.8594\nEpoch 6/10\n26/26 [==============================] - 0s 4ms/step - loss: 86.1179\nEpoch 7/10\n26/26 [==============================] - 0s 4ms/step - loss: 76.7525: 0s - loss: 74.\nEpoch 8/10\n26/26 [==============================] - 0s 5ms/step - loss: 71.9598\nEpoch 9/10\n26/26 [==============================] - 0s 5ms/step - loss: 68.5318\nEpoch 10/10\n26/26 [==============================] - 0s 5ms/step - loss: 66.7393\n7/7 [==============================] - 0s 4ms/step - loss: 68.3569\nEpoch 1/10\n26/26 [==============================] - 0s 4ms/step - loss: 229.8424\nEpoch 2/10\n26/26 [==============================] - 0s 5ms/step - loss: 152.8784\nEpoch 3/10\n26/26 [==============================] - 0s 5ms/step - loss: 124.9488\nEpoch 4/10\n26/26 [==============================] - 0s 5ms/step - loss: 101.3086A: 0s - loss: 103.713\nEpoch 5/10\n26/26 [==============================] - 0s 5ms/step - loss: 83.7563\nEpoch 6/10\n26/26 [==============================] - 0s 6ms/step - loss: 72.4537\nEpoch 7/10\n26/26 [==============================] - 0s 5ms/step - loss: 68.8848\nEpoch 8/10\n26/26 [==============================] - 0s 5ms/step - loss: 65.2835\nEpoch 9/10\n26/26 [==============================] - 0s 5ms/step - loss: 63.0682\nEpoch 10/10\n26/26 [==============================] - 0s 5ms/step - loss: 61.2580\n7/7 [==============================] - 0s 4ms/step - loss: 78.1588\nEpoch 1/10\n26/26 [==============================] - 0s 3ms/step - loss: 394.2870\nEpoch 2/10\n26/26 [==============================] - 0s 4ms/step - loss: 154.2006\nEpoch 3/10\n26/26 [==============================] - 0s 4ms/step - loss: 121.5101\nEpoch 4/10\n26/26 [==============================] - 0s 5ms/step - loss: 96.5747\nEpoch 5/10\n26/26 [==============================] - 0s 5ms/step - loss: 79.6176\nEpoch 6/10\n26/26 [==============================] - 0s 5ms/step - loss: 71.7972\nEpoch 7/10\n26/26 [==============================] - 0s 6ms/step - loss: 67.3771\nEpoch 8/10\n26/26 [==============================] - 0s 6ms/step - loss: 65.3503\nEpoch 9/10\n26/26 [==============================] - 0s 5ms/step - loss: 63.4878\nEpoch 10/10\n26/26 [==============================] - 0s 5ms/step - loss: 62.6430\n7/7 [==============================] - 0s 3ms/step - loss: 72.4628\nEpoch 1/10\n26/26 [==============================] - 0s 4ms/step - loss: 457.5549\nEpoch 2/10\n26/26 [==============================] - 0s 4ms/step - loss: 213.4543\nEpoch 3/10\n26/26 [==============================] - 0s 3ms/step - loss: 137.6380\nEpoch 4/10\n26/26 [==============================] - 0s 5ms/step - loss: 123.8971\nEpoch 5/10\n26/26 [==============================] - 0s 6ms/step - loss: 111.1874\nEpoch 6/10\n26/26 [==============================] - 0s 4ms/step - loss: 101.2769\nEpoch 7/10\n26/26 [==============================] - 0s 5ms/step - loss: 91.2895\nEpoch 8/10\n26/26 [==============================] - 0s 5ms/step - loss: 84.0696\nEpoch 9/10\n26/26 [==============================] - 0s 4ms/step - loss: 79.6263\nEpoch 10/10\n26/26 [==============================] - 0s 4ms/step - loss: 76.9124\n7/7 
[==============================] - 0s 1ms/step - loss: 49.1916\nEpoch 1/10\n26/26 [==============================] - 0s 1ms/step - loss: 337.1278\nEpoch 2/10\n26/26 [==============================] - 0s 2ms/step - loss: 109.4944\nEpoch 3/10\n26/26 [==============================] - 0s 3ms/step - loss: 95.3181\nEpoch 4/10\n26/26 [==============================] - 0s 3ms/step - loss: 88.5550\nEpoch 5/10\n26/26 [==============================] - 0s 3ms/step - loss: 83.2130\nEpoch 6/10\n26/26 [==============================] - 0s 2ms/step - loss: 79.5414\nEpoch 7/10\n26/26 [==============================] - 0s 4ms/step - loss: 76.1562\nEpoch 8/10\n26/26 [==============================] - 0s 4ms/step - loss: 74.1669\nEpoch 9/10\n26/26 [==============================] - 0s 5ms/step - loss: 73.5261\nEpoch 10/10\n26/26 [==============================] - 0s 5ms/step - loss: 74.5165\n7/7 [==============================] - 0s 854us/step - loss: 74.1472\nMean Squared Error: 68.46 (10.14)\n"
]
],
[
[
"## Linear Regression Models",
"_____no_output_____"
],
[
"There are a large collection of linear regression models in sklearn. Let's start with a simple ordinary linear regression",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\nlinear = LinearRegression()\nlinear.fit(boston.data, boston.target)\npreds = linear.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"3.270862810900317\n21.894831181729206\n0.7406426641094094\n"
]
],
[
[
"We can also take a look at the betas:",
"_____no_output_____"
]
],
[
[
"linear.coef_",
"_____no_output_____"
]
],
[
[
"We can use regularized models as well. Here is ridge regression:",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import Ridge\nridge = Ridge(alpha=10)\nridge.fit(boston.data, boston.target)\npreds = ridge.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))\nridge.coef_",
"3.315169248123664\n22.660363555639318\n0.7315744764907257\n"
]
],
[
[
"And here is lasso",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import Lasso\nlasso = Lasso(alpha=1)\nlasso.fit(boston.data, boston.target)\npreds = lasso.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))\nlasso.coef_",
"3.6117102456478434\n26.79609915726647\n0.6825842212709925\n"
]
],
[
[
"There are many other linear regression models available. See the [linear model documentation](http://scikit-learn.org/stable/modules/linear_model.html) for more.",
"_____no_output_____"
],
[
"### Exercise\n\nThe elastic net is another linear regression method that combines ridge and lasso regularization. Try running it on this dataset, referring to the documentation as needed to learn how to use it and control the hyperparameters.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import ElasticNet\nelastic = ElasticNet(alpha=1, l1_ratio=.7)\nelastic.fit(boston.data, boston.target)\npreds = elastic.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))\nelastic.coef_",
"3.6003900945410816\n26.61250550876538\n0.6847589975534153\n"
],
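[
"# An added, hedged sketch (not part of the original lesson): sklearn offers many more linear models,\n# and HuberRegressor is one that is robust to outliers. This assumes `boston` and the error metrics\n# are still loaded from the cells above.\nfrom sklearn.linear_model import HuberRegressor\nhuber = HuberRegressor(max_iter=1000)\nhuber.fit(boston.data, boston.target)\npreds = huber.predict(boston.data)\nprint(mean_absolute_error(boston.target, preds))\nprint(mean_squared_error(boston.target, preds))\nprint(r2_score(boston.target, preds))",
"_____no_output_____"
],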
[
"?DecisionTreeClassifier",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0d95e3eccce87f1971b920651070af6c00a51e5 | 1,760 | ipynb | Jupyter Notebook | 01-Lesson-Plans/19-Supervised-Machine-Learning/2/Activities/09-Ins_SVM/Unsolved/Ins_Support_Vector_Machine.ipynb | anirudhmungre/sneaky-lessons | 8e48015c50865059db96f8cd369bcc15365d66c7 | [
"ADSL"
] | null | null | null | 01-Lesson-Plans/19-Supervised-Machine-Learning/2/Activities/09-Ins_SVM/Unsolved/Ins_Support_Vector_Machine.ipynb | anirudhmungre/sneaky-lessons | 8e48015c50865059db96f8cd369bcc15365d66c7 | [
"ADSL"
] | null | null | null | 01-Lesson-Plans/19-Supervised-Machine-Learning/2/Activities/09-Ins_SVM/Unsolved/Ins_Support_Vector_Machine.ipynb | anirudhmungre/sneaky-lessons | 8e48015c50865059db96f8cd369bcc15365d66c7 | [
"ADSL"
] | null | null | null | 20.952381 | 86 | 0.538636 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfrom matplotlib import style\nstyle.use(\"ggplot\")\n# from matplotlib import rcParams\n# rcParams['figure.figsize'] = 10, 8",
"_____no_output_____"
],
[
"from sklearn.datasets import make_blobs\nX, y = make_blobs(n_samples=40, centers=2, random_state=42, cluster_std=1.25)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=100, cmap=\"bwr\");\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d0d95fe5966e22137573c224fd9cb36bdbf5ea7d | 15,807 | ipynb | Jupyter Notebook | Resources/Data-Science/Machine-Learning/Multiple-Linear-Regression/sklearn - Multiple Linear Regression_.ipynb | nate4bs/DataRepo | 916f75ff2996967c62684bc75be23dd23aa59a67 | [
"MIT"
] | null | null | null | Resources/Data-Science/Machine-Learning/Multiple-Linear-Regression/sklearn - Multiple Linear Regression_.ipynb | nate4bs/DataRepo | 916f75ff2996967c62684bc75be23dd23aa59a67 | [
"MIT"
] | 1 | 2020-06-11T23:14:24.000Z | 2020-06-11T23:14:24.000Z | Resources/Data-Science/Machine-Learning/Multiple-Linear-Regression/sklearn - Multiple Linear Regression_.ipynb | nate4bs/DataRepo | 916f75ff2996967c62684bc75be23dd23aa59a67 | [
"MIT"
] | null | null | null | 23.627803 | 214 | 0.450497 | [
[
[
"# Multiple Linear Regression with sklearn - Exercise Solution",
"_____no_output_____"
],
[
"You are given a real estate dataset. \n\nReal estate is one of those examples that every regression course goes through as it is extremely easy to understand and there is a (almost always) certain causal relationship to be found.\n\nThe data is located in the file: 'real_estate_price_size_year.csv'. \n\nYou are expected to create a multiple linear regression (similar to the one in the lecture), using the new data. \n\nApart from that, please:\n- Display the intercept and coefficient(s)\n- Find the R-squared and Adjusted R-squared\n- Compare the R-squared and the Adjusted R-squared\n- Compare the R-squared of this regression and the simple linear regression where only 'size' was used\n- Using the model make a prediction about an apartment with size 750 sq.ft. from 2009\n- Find the univariate (or multivariate if you wish - see the article) p-values of the two variables. What can you say about them?\n- Create a summary table with your findings\n\nIn this exercise, the dependent variable is 'price', while the independent variables are 'size' and 'year'.\n\nGood luck!",
"_____no_output_____"
],
[
"## Import the relevant libraries",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set()\n\nfrom sklearn.linear_model import LinearRegression",
"_____no_output_____"
]
],
[
[
"## Load the data",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('real_estate_price_size_year.csv')\ndata.head()",
"_____no_output_____"
],
[
"data.describe()",
"_____no_output_____"
]
],
[
[
"## Create the regression",
"_____no_output_____"
],
[
"### Declare the dependent and the independent variables",
"_____no_output_____"
]
],
[
[
"x = data[['size','year']]\ny = data['price']",
"_____no_output_____"
]
],
[
[
"### Regression",
"_____no_output_____"
]
],
[
[
"reg = LinearRegression()\nreg.fit(x,y)",
"_____no_output_____"
]
],
[
[
"### Find the intercept",
"_____no_output_____"
]
],
[
[
"reg.intercept_",
"_____no_output_____"
]
],
[
[
"### Find the coefficients",
"_____no_output_____"
]
],
[
[
"reg.coef_",
"_____no_output_____"
]
],
[
[
"### Calculate the R-squared",
"_____no_output_____"
]
],
[
[
"reg.score(x,y)",
"_____no_output_____"
]
],
[
[
"### Calculate the Adjusted R-squared",
"_____no_output_____"
]
],
[
[
"# Let's use the handy function we created\ndef adj_r2(x,y):\n r2 = reg.score(x,y)\n n = x.shape[0]\n p = x.shape[1]\n adjusted_r2 = 1-(1-r2)*(n-1)/(n-p-1)\n return adjusted_r2",
"_____no_output_____"
],
[
"adj_r2(x,y)",
"_____no_output_____"
]
],
[
[
"### Compare the R-squared and the Adjusted R-squared",
"_____no_output_____"
],
[
"It seems the the R-squared is only slightly larger than the Adjusted R-squared, implying that we were not penalized a lot for the inclusion of 2 independent variables. ",
"_____no_output_____"
],
[
"### Compare the Adjusted R-squared with the R-squared of the simple linear regression",
"_____no_output_____"
],
[
"Comparing the Adjusted R-squared with the R-squared of the simple linear regression (when only 'size' was used - a couple of lectures ago), we realize that 'Year' is not bringing too much value to the result.",
"_____no_output_____"
],
[
"### Making predictions\n\nFind the predicted price of an apartment that has a size of 750 sq.ft. from 2009.",
"_____no_output_____"
]
],
[
[
"reg.predict([[750,2009]])",
"_____no_output_____"
]
],
[
[
"### Calculate the univariate p-values of the variables",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_selection import f_regression",
"_____no_output_____"
],
[
"f_regression(x,y)",
"_____no_output_____"
],
[
"p_values = f_regression(x,y)[1]\np_values",
"_____no_output_____"
],
[
"p_values.round(3)",
"_____no_output_____"
]
],
[
[
"### Create a summary table with your findings",
"_____no_output_____"
]
],
[
[
"reg_summary = pd.DataFrame(data = x.columns.values, columns=['Features'])\nreg_summary ['Coefficients'] = reg.coef_\nreg_summary ['p-values'] = p_values.round(3)\nreg_summary",
"_____no_output_____"
]
],
[
[
"It seems that 'Year' is not event significant, therefore we should remove it from the model.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0d9675cc6fc16ab2b39c4fce1497608d4c9972b | 165,736 | ipynb | Jupyter Notebook | 1-Lessons/Lesson08/OriginalPowerpoint/ENGR-1330-Lesson8-Dev.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
] | null | null | null | 1-Lessons/Lesson08/OriginalPowerpoint/ENGR-1330-Lesson8-Dev.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
] | null | null | null | 1-Lessons/Lesson08/OriginalPowerpoint/ENGR-1330-Lesson8-Dev.ipynb | dustykat/engr-1330-psuedo-course | 3e7e31a32a1896fcb1fd82b573daa5248e465a36 | [
"CC0-1.0"
] | null | null | null | 39.89793 | 54,112 | 0.581521 | [
[
[
"# ENGR 1330 Computational Thinking with Data Science \nLast GitHub Commit Date: 14 February 2021\n\n## Lesson 8 The Pandas module \n- About Pandas\n- How to install\n - Anaconda\n - JupyterHub/Lab (on Linux)\n - JupyterHub/Lab (on MacOS)\n - JupyterHub/Lab (on Windoze)\n- The Dataframe\n - Primatives\n - Using Pandas\n - Create, Modify, Delete datagrames\n - Slice Dataframes\n - Conditional Selection\n - Synthetic Programming (Symbolic Function Application)\n - Files\n- Access Files from a remote Web Server\n - Get file contents\n - Get the actual file\n - Adaptations for encrypted servers (future semester)\n---\n### Special Script Blocks",
"_____no_output_____"
]
],
[
[
"%%html\n<!--Script block to left align Markdown Tables-->\n<style>\n table {margin-left: 0 !important;}\n</style>",
"_____no_output_____"
]
],
[
[
"---\n## Objectives\n1. To understand the **dataframe abstraction** as implemented in the Pandas library(module).\n 1. To be able to access and manipulate data within a dataframe\n 2. To be able to obtain basic statistical measures of data within a dataframe\n2. Read/Write from/to files\n 1. MS Excel-type files (.xls,.xlsx,.csv) (LibreOffice files use the MS .xml standard)\n 2. Ordinary ASCII (.txt) files \n3. Access files directly from a URL (advanced concept)\n 1. Using a wget-type function\n 2. Using a curl-type function\n 3. Using API keys (future versions)\n",
"_____no_output_____"
],
[
"### Pandas: \nPandas is the core library for dataframe manipulation in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. The library’s name is derived from the term ‘Panel Data’. \nIf you are curious about Pandas, this cheat sheet is recommended: [https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)\n\n#### Data Structure \nThe Primary data structure is called a dataframe. It is an **abstraction** where data are represented as a 2-dimensional mutable and heterogenous tabular data structure; much like a Worksheet in MS Excel. The structure itself is popular among statisticians and data scientists and business executives. \n\nAccording to the marketing department \n*\"Pandas Provides rich data structures and functions designed to make working with data fast, easy, and expressive. It is useful in data manipulation, cleaning, and analysis; Pandas excels in performance and productivity \"*",
"_____no_output_____"
],
[
"---\n\n# The Dataframe\n\nA data table is called a `DataFrame` in pandas (and other programming environments too). The figure below from [https://pandas.pydata.org/docs/getting_started/index.html](https://pandas.pydata.org/docs/getting_started/index.html) illustrates a dataframe model:\n\n \n\nEach **column** and each **row** in a dataframe is called a series, the header row, and index column are special. \nLike MS Excel we can query the dataframe to find the contents of a particular `cell` using its **row name** and **column name**, or operate on entire **rows** and **columns**\n\nTo use pandas, we need to import the module.",
"_____no_output_____"
],
[
"## Computational Thinking Concepts\n\nThe CT concepts expressed within Pandas include:\n\n- `Decomposition` : Data interpretation, manipulation, and analysis of Pandas dataframes is an act of decomposition -- although the dataframes can be quite complex.\n- `Abstraction` : The dataframe is a data representation abstraction that allows for placeholder operations, later substituted with specific contents for a problem; enhances reuse and readability. We leverage the principle of algebraic replacement using these abstractions.\n- `Algorithms` : Data interpretation, manipulation, and analysis of dataframes are generally implemented as part of a supervisory algorithm.",
"_____no_output_____"
],
[
"## Module Set-Up\n\nIn principle, Pandas should be available in a default Anaconda install \n- You should not have to do any extra installation steps to install the library in Python\n- You do have to **import** the library in your scripts\n\nHow to check\n- Simply open a code cell and run `import pandas` if the notebook does not protest (i.e. pink block of error), the youis good to go.",
"_____no_output_____"
]
],
[
[
"import pandas",
"_____no_output_____"
]
],
[
[
"If you do get an error, that means that you will have to install using `conda` or `pip`; you are on-your-own here! On the **content server** the process is:\n\n1. Open a new terminal from the launcher\n2. Change to root user `su` then enter the root password\n3. `sudo -H /opt/jupyterhib/bin/python3 -m pip install pandas`\n4. Wait until the install is complete; for security, user `compthink` is not in the `sudo` group\n5. Verify the install by trying to execute `import pandas` as above.\n\nThe process above will be similar on a Macintosh, or Windows if you did not use an Anaconda distribution. Best is to have a sucessful anaconda install, or go to the [GoodJobUntilMyOrgansGetHarvested](https://apply.mysubwaycareer.com/us/en/). \n\nIf you have to do this kind of install, you will have to do some reading, some references I find useful are:\n1. https://jupyterlab.readthedocs.io/en/stable/user/extensions.html\n2. https://www.pugetsystems.com/labs/hpc/Note-How-To-Install-JupyterHub-on-a-Local-Server-1673/#InstallJupyterHub\n3. https://jupyterhub.readthedocs.io/en/stable/installation-guide-hard.html (This is the approach on the content server which has a functioning JupyterHub)",
"_____no_output_____"
],
[
"### Dataframe-type Structure using primative python\n\nFirst lets construct a dataframe like object using python primatives.\nWe will construct 3 lists, one for row names, one for column names, and one for the content.",
"_____no_output_____"
]
],
[
[
"import numpy\nmytabular = numpy.random.randint(1,100,(5,4))\nmyrowname = ['A','B','C','D','E']\nmycolname = ['W','X','Y','Z']\nmytable = [['' for jcol in range(len(mycolname)+1)] for irow in range(len(myrowname)+1)] #non-null destination matrix, note the implied loop construction",
"_____no_output_____"
]
],
[
[
"The above builds a placeholder named `mytable` for the psuedo-dataframe.\nNext we populate the table, using a for loop to write the column names in the first row, row names in the first column, and the table fill for the rest of the table.",
"_____no_output_____"
]
],
[
[
"for irow in range(1,len(myrowname)+1): # write the row names\n mytable[irow][0]=myrowname[irow-1]\nfor jcol in range(1,len(mycolname)+1): # write the column names\n mytable[0][jcol]=mycolname[jcol-1] \nfor irow in range(1,len(myrowname)+1): # fill the table (note the nested loop)\n for jcol in range(1,len(mycolname)+1):\n mytable[irow][jcol]=mytabular[irow-1][jcol-1]",
"_____no_output_____"
]
],
[
[
"Now lets print the table out by row and we see we have a very dataframe-like structure",
"_____no_output_____"
]
],
[
[
"for irow in range(0,len(myrowname)+1):\n print(mytable[irow][0:len(mycolname)+1])",
"['', 'W', 'X', 'Y', 'Z']\n['A', 25, 43, 34, 7]\n['B', 49, 28, 24, 53]\n['C', 66, 21, 16, 33]\n['D', 71, 6, 21, 63]\n['E', 13, 94, 13, 17]\n"
]
],
[
[
"We can also query by row ",
"_____no_output_____"
]
],
[
[
"print(mytable[3][0:len(mycolname)+1])",
"['C', 66, 21, 16, 33]\n"
]
],
[
[
"Or by column",
"_____no_output_____"
]
],
[
[
"for irow in range(0,len(myrowname)+1): #cannot use implied loop in a column slice\n print(mytable[irow][2])",
"X\n43\n28\n21\n6\n94\n"
]
],
[
[
"Or by row+column index; sort of looks like a spreadsheet syntax.",
"_____no_output_____"
]
],
[
[
"print(' ',mytable[0][3])\nprint(mytable[3][0],mytable[3][3])",
" Y\nC 16\n"
]
],
[
[
"# Now we shall create a proper dataframe\nWe will now do the same using pandas",
"_____no_output_____"
]
],
[
[
"mydf = pandas.DataFrame(numpy.random.randint(1,100,(5,4)), ['A','B','C','D','E'], ['W','X','Y','Z'])\nmydf",
"_____no_output_____"
]
],
[
[
"We can also turn our table into a dataframe, notice how the constructor adds header row and index column",
"_____no_output_____"
]
],
[
[
"mydf1 = pandas.DataFrame(mytable)\nmydf1",
"_____no_output_____"
]
],
[
[
"To get proper behavior, we can just reuse our original objects",
"_____no_output_____"
]
],
[
[
"mydf2 = pandas.DataFrame(mytabular,myrowname,mycolname)\nmydf2",
"_____no_output_____"
]
],
[
[
"Why are `mydf` and `mydf2` different?",
"_____no_output_____"
],
[
"### Getting the shape of dataframes\n\nThe shape method, which is available after the dataframe is constructed, will return the row and column rank (count) of a dataframe.",
"_____no_output_____"
]
],
[
[
"mydf.shape",
"_____no_output_____"
],
[
"mydf1.shape",
"_____no_output_____"
],
[
"mydf2.shape",
"_____no_output_____"
]
],
[
[
"### Appending new columns\nTo append a column simply assign a value to a new column name to the dataframe",
"_____no_output_____"
]
],
[
[
"mydf['new']= 'NA'",
"_____no_output_____"
],
[
"mydf",
"_____no_output_____"
]
],
[
[
"## Appending new rows\nThis is sometimes a bit trickier but here is one way:\n- create a copy of a row, give it a new name. \n- concatenate it back into the dataframe.",
"_____no_output_____"
]
],
[
[
"newrow = mydf.loc[['E']].rename(index={\"E\": \"X\"}) # create a single row, rename the index\nnewtable = pandas.concat([mydf,newrow]) # concatenate the row to bottom of df - note the syntax",
"_____no_output_____"
],
[
"newtable",
"_____no_output_____"
]
],
[
[
"### Removing Rows and Columns\n\nTo remove a column is straightforward, we use the drop method",
"_____no_output_____"
]
],
[
[
"newtable.drop('new', axis=1, inplace = True)\nnewtable",
"_____no_output_____"
]
],
[
[
"To remove a row, you really got to want to, easiest is probablty to create a new dataframe with the row removed",
"_____no_output_____"
]
],
[
[
"newtable = newtable.loc[['A','B','D','E','X']] # select all rows except C\nnewtable",
"_____no_output_____"
],
[
"# or just use drop with axis specify\nnewtable.drop('X', axis=0, inplace = True)",
"_____no_output_____"
],
[
"newtable",
"_____no_output_____"
]
],
[
[
"# Indexing\nWe have already been indexing, but a few examples follow:",
"_____no_output_____"
]
],
[
[
"newtable['X'] #Selecing a single column",
"_____no_output_____"
],
[
"newtable[['X','W']] #Selecing a multiple columns",
"_____no_output_____"
],
[
"newtable.loc['E'] #Selecing rows based on label via loc[ ] indexer",
"_____no_output_____"
],
[
"newtable\n ",
"_____no_output_____"
],
[
"newtable.loc[['E','D','B']] #Selecing multiple rows based on label via loc[ ] indexer",
"_____no_output_____"
],
[
"newtable.loc[['B','E','D'],['X','Y']] #Selecting elements via both rows and columns via loc[ ] indexer",
"_____no_output_____"
]
],
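[
[
"The `loc[ ]` indexer above selects by *label*; its integer-position counterpart is `iloc[ ]`. A minimal added sketch:",
"_____no_output_____"
]
],
[
[
"newtable.iloc[0:2, 1:3] #Selecting by integer position: first two rows, second and third columns",
"_____no_output_____"
]
],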
[
[
"# Conditional Selection",
"_____no_output_____"
]
],
[
[
"mydf = pandas.DataFrame({'col1':[1,2,3,4,5,6,7,8],\n 'col2':[444,555,666,444,666,111,222,222],\n 'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})\nmydf",
"_____no_output_____"
],
[
"#What fruit corresponds to the number 555 in ‘col2’?\n\nmydf[mydf['col2']==555]['col3']",
"_____no_output_____"
],
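[
"#An added example: conditions combine with & (and) and | (or); wrap each condition in parentheses\n\nmydf[(mydf['col1']>3) & (mydf['col2']==666)]['col3']",
"_____no_output_____"
],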
[
"#What fruit corresponds to the minimum number in ‘col2’?\n\nmydf[mydf['col2']==mydf['col2'].min()]['col3']",
"_____no_output_____"
]
],
[
[
"# Descriptor Functions",
"_____no_output_____"
]
],
[
[
"#Creating a dataframe from a dictionary\n\nmydf = pandas.DataFrame({'col1':[1,2,3,4,5,6,7,8],\n 'col2':[444,555,666,444,666,111,222,222],\n 'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})\nmydf",
"_____no_output_____"
]
],
[
[
"### `head` method\n\nReturns the first few rows, useful to infer structure",
"_____no_output_____"
]
],
[
[
"#Returns only the first five rows\n\nmydf.head()",
"_____no_output_____"
]
],
[
[
"### `info` method\n\nReturns the data model (data column count, names, data types)",
"_____no_output_____"
]
],
[
[
"#Info about the dataframe\n\nmydf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8 entries, 0 to 7\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 col1 8 non-null int64 \n 1 col2 8 non-null int64 \n 2 col3 8 non-null object\ndtypes: int64(2), object(1)\nmemory usage: 320.0+ bytes\n"
]
],
[
[
"### `describe` method\n\nReturns summary statistics of each numeric column. \nAlso returns the minimum and maximum value in each column, and the IQR (Interquartile Range). \nAgain useful to understand structure of the columns.",
"_____no_output_____"
]
],
[
[
"#Statistics of the dataframe\n\nmydf.describe()",
"_____no_output_____"
]
],
[
[
"### Counting and Sum methods\n\nThere are also methods for counts and sums by specific columns",
"_____no_output_____"
]
],
[
[
"mydf['col2'].sum() #Sum of a specified column",
"_____no_output_____"
]
],
[
[
"The `unique` method returns a list of unique values (filters out duplicates in the list, underlying dataframe is preserved)",
"_____no_output_____"
]
],
[
[
"mydf['col2'].unique() #Returns the list of unique values along the indexed column ",
"_____no_output_____"
]
],
[
[
"The `nunique` method returns a count of unique values",
"_____no_output_____"
]
],
[
[
"mydf['col2'].nunique() #Returns the total number of unique values along the indexed column ",
"_____no_output_____"
]
],
[
[
"The `value_counts()` method returns the count of each unique value (kind of like a histogram, but each value is the bin)",
"_____no_output_____"
]
],
[
[
"mydf['col2'].value_counts() #Returns the number of occurences of each unique value",
"_____no_output_____"
]
],
[
[
"## Using functions in dataframes - symbolic apply\n\nThe power of **Pandas** is an ability to apply a function to each element of a dataframe series (or a whole frame) by a technique called symbolic (or synthetic programming) application of the function.\n\nThis employs principles of **pattern matching**, **abstraction**, and **algorithm development**; a holy trinity of Computational Thinning.\n\nIt's somewhat complicated but quite handy, best shown by an example:",
"_____no_output_____"
]
],
[
[
"def times2(x): # A prototype function to scalar multiply an object x by 2\n return(x*2)\n\nprint(mydf)\nprint('Apply the times2 function to col2')\nmydf['reallynew'] = mydf['col2'].apply(times2) #Symbolic apply the function to each element of column col2, result is another dataframe",
" col1 col2 col3\n0 1 444 orange\n1 2 555 apple\n2 3 666 grape\n3 4 444 mango\n4 5 666 jackfruit\n5 6 111 watermelon\n6 7 222 banana\n7 8 222 peach\nApply the times2 function to col2\n"
],
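[
"# An added sketch: apply also accepts built-ins and anonymous (lambda) functions\nprint(mydf['col3'].apply(len)) # length of each fruit name\nprint(mydf['col2'].apply(lambda x: x/2)) # halve each value",
"_____no_output_____"
],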
[
"mydf",
"_____no_output_____"
]
],
[
[
"## Sorts ",
"_____no_output_____"
]
],
[
[
"mydf.sort_values('col2', ascending = True) #Sorting based on columns ",
"_____no_output_____"
],
[
"mydf.sort_values('col3', ascending = True) #Lexiographic sort",
"_____no_output_____"
]
],
[
[
"# Aggregating (Grouping Values) dataframe contents\n",
"_____no_output_____"
]
],
[
[
"#Creating a dataframe from a dictionary\n\ndata = {\n 'key' : ['A', 'B', 'C', 'A', 'B', 'C'],\n 'data1' : [1, 2, 3, 4, 5, 6],\n 'data2' : [10, 11, 12, 13, 14, 15],\n 'data3' : [20, 21, 22, 13, 24, 25]\n}\n\nmydf1 = pandas.DataFrame(data)\nmydf1",
"_____no_output_____"
],
[
"# Grouping and summing values in all the columns based on the column 'key'\n\nmydf1.groupby('key').sum()",
"_____no_output_____"
],
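[
"# An added sketch: agg() computes several aggregates in one pass\n\nmydf1.groupby('key').agg(['min', 'max', 'mean'])",
"_____no_output_____"
],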
[
"# Grouping and summing values in the selected columns based on the column 'key'\n\nmydf1.groupby('key')[['data1', 'data2']].sum()",
"_____no_output_____"
]
],
[
[
"# Filtering out missing values\n\nFiltering and *cleaning* are often used to describe the process where data that does not support a narrative is removed ;typically for maintenance of profit applications, if the data are actually missing that is common situation where cleaning is justified.",
"_____no_output_____"
]
],
[
[
"#Creating a dataframe from a dictionary\n\ndf = pandas.DataFrame({'col1':[1,2,3,4,None,6,7,None],\n 'col2':[444,555,None,444,666,111,None,222],\n 'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})\ndf",
"_____no_output_____"
]
],
[
[
"Below we drop any row that contains a `NaN` code.",
"_____no_output_____"
]
],
[
[
"df_dropped = df.dropna()\ndf_dropped",
"_____no_output_____"
]
],
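[
[
"`dropna` can be softened; as an added sketch, `thresh` keeps rows with at least that many non-null values, and `subset` restricts the check to named columns.",
"_____no_output_____"
]
],
[
[
"print(df.dropna(thresh=2)) # keep rows having at least 2 non-null values\nprint(df.dropna(subset=['col1'])) # drop rows only where col1 is missing",
"_____no_output_____"
]
],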
[
[
"Below we replace `NaN` codes with some value, in this case 0",
"_____no_output_____"
]
],
[
[
"df_filled1 = df.fillna(0)\ndf_filled1",
"_____no_output_____"
]
],
[
[
"Below we replace `NaN` codes with some value, in this case the mean value of of the column in which the missing value code resides.",
"_____no_output_____"
]
],
[
[
"df_filled2 = df.fillna(df.mean())\ndf_filled2",
"_____no_output_____"
]
],
[
[
"---\n## Reading a File into a Dataframe\n\nPandas has methods to read common file types, such as `csv`,`xlsx`, and `json`. \nOrdinary text files are also quite manageable.\n\nOn a machine you control you can write script to retrieve files from the internet and process them.\n",
"_____no_output_____"
]
],
[
[
"readfilecsv = pandas.read_csv('CSV_ReadingFile.csv') #Reading a .csv file\nprint(readfilecsv)",
" a b c d\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n3 12 13 14 15\n"
]
],
[
[
"Similar to reading and writing .csv files, you can also read and write .xslx files as below (useful to know this)",
"_____no_output_____"
]
],
[
[
"readfileexcel = pandas.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1', engine='openpyxl') #Reading a .xlsx file\nprint(readfileexcel)",
" Unnamed: 0 a b c d\n0 0 0 1 2 3\n1 1 4 5 6 7\n2 2 8 9 10 11\n3 3 12 13 14 15\n"
]
],
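[
[
"Pandas reads JSON the same way; a hedged sketch (the file name here is hypothetical -- substitute one that exists on your machine):",
"_____no_output_____"
]
],
[
[
"readfilejson = pandas.read_json('JSON_ReadingFile.json') #hypothetical file name\nprint(readfilejson)",
"_____no_output_____"
]
],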
[
[
"# Writing a dataframe to file",
"_____no_output_____"
]
],
[
[
"#Creating and writing to a .csv file\nreadfilecsv = pandas.read_csv('CSV_ReadingFile.csv')\nreadfilecsv.to_csv('CSV_WritingFile1.csv')\nreadfilecsv = pandas.read_csv('CSV_WritingFile1.csv')\nprint(readfilecsv)",
" Unnamed: 0 a b c d\n0 0 0 1 2 3\n1 1 4 5 6 7\n2 2 8 9 10 11\n3 3 12 13 14 15\n"
],
[
"#Creating and writing to a .csv file by excluding row labels \nreadfilecsv = pandas.read_csv('CSV_ReadingFile.csv')\nreadfilecsv.to_csv('CSV_WritingFile2.csv', index = False)\nreadfilecsv = pandas.read_csv('CSV_WritingFile2.csv')\nprint(readfilecsv)",
" a b c d\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n3 12 13 14 15\n"
],
[
"#Creating and writing to a .xlsx file\nreadfileexcel = pandas.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1')\nreadfileexcel.to_excel('Excel_WritingFile.xlsx', sheet_name='MySheet', index = False)\nreadfileexcel = pandas.read_excel('Excel_WritingFile.xlsx', sheet_name='MySheet')\nprint(readfileexcel)",
" Unnamed: 0 a b c d\n0 0 0 1 2 3\n1 1 4 5 6 7\n2 2 8 9 10 11\n3 3 12 13 14 15\n"
]
],
[
[
"---\n\n## Downloading files from websites (optional)\n\nThis section shows how to get files from a remote computer. There are several ways to get the files, most importantly you need the FQDN to the file.",
"_____no_output_____"
],
[
"### Method: Get the actual file from a remote web server (unencrypted)\n\n> - You know the FQDN to the file it will be in structure of \"http://server-name/.../filename.ext\"\n> - The server is running ordinary (unencrypted) web services, i.e. `http://...`\n\nWe will need a module to interface with the remote server. Here we will use ``requests`` , so first we load the module\n\n> You may need to install the module into your anaconda environment using the anaconda power shell, on my computer the commands are:\n> - sudo -H /opt/jupyterhub/bin/python3 -m pip install requests \n>\n> Or:\n> - sudo -H /opt/conda/envs/python/bin/python -m pip install requests\n>\n> You will have to do some reading, but with any luck something similar will work for you. ",
"_____no_output_____"
]
],
[
[
"import requests # Module to process http/https requests",
"_____no_output_____"
]
],
[
[
"Now we will generate a ``GET`` request to the remote http server. I chose to do so using a variable to store the remote URL so I can reuse code in future projects. The ``GET`` request (an http/https method) is generated with the requests method ``get`` and assigned to an object named ``rget`` -- the name is arbitrary. Next we extract the file from the ``rget`` object and write it to a local file with the name of the remote file - esentially automating the download process. Then we import the ``pandas`` module.",
"_____no_output_____"
]
],
[
[
"remote_url=\"http://54.243.252.9/engr-1330-webroot/MyJupyterNotebooks/42-DataScience-EvaporationAnalysis/all_quads_gross_evaporation.csv\" # set the url\nrget = requests.get(remote_url, allow_redirects=True) # get the remote resource, follow imbedded links\nopen('all_quads_gross_evaporation.csv','wb').write(rget.content) # extract from the remote the contents, assign to a local file same name\nimport pandas as pd # Module to process dataframes (not absolutely needed but somewhat easier than using primatives, and gives graphing tools)",
"_____no_output_____"
],
[
"# verify file exists\n! pwd\n! ls -la",
"/home/sensei/engr-1330-webroot/1-Lessons/Lesson08/OriginalPowerpoint\ntotal 1080\ndrwxr-xr-x 4 sensei sensei 4096 Sep 10 21:41 .\ndrwxr-xr-x 5 sensei sensei 4096 Feb 15 2021 ..\ndrwxr-xr-x 2 sensei sensei 4096 Sep 10 21:11 .ipynb_checkpoints\n-rw-rw-r-- 1 sensei sensei 21150 Feb 16 2021 01-table-dataframe.png\n-rw-rw-r-- 1 sensei sensei 51 Feb 16 2021 CSV_ReadingFile.csv\n-rw-rw-r-- 1 sensei sensei 55 Feb 16 2021 CSV_WritingFile1.csv\n-rw-rw-r-- 1 sensei sensei 46 Feb 16 2021 CSV_WritingFile2.csv\n-rw-rw-r-- 1 sensei sensei 693687 Feb 16 2021 ENGR-1330-Lesson8-Dev.html\n-rw-rw-r-- 1 sensei sensei 165750 Sep 10 21:41 ENGR-1330-Lesson8-Dev.ipynb\n-rw-rw-r-- 1 sensei sensei 166998 Mar 31 21:29 ENGR-1330-Lesson8-Dev.ipynb.lcl\n-rw-rw-r-- 1 sensei sensei 5508 Feb 16 2021 Excel_ReadingFile.xlsx\n-rw-rw-r-- 1 sensei sensei 5043 Feb 18 2021 Excel_WritingFile.xlsx\n-rw-rw-r-- 1 sensei sensei 274 Sep 10 21:41 all_quads_gross_evaporation.csv\ndrwxrwxr-x 3 sensei sensei 4096 Feb 16 2021 src.old\n"
]
],
[
[
"Now we can read the file contents and check its structure, before proceeding.",
"_____no_output_____"
]
],
[
[
"#evapdf = pd.read_csv(\"all_quads_gross_evaporation.csv\",parse_dates=[\"YYYY-MM\"]) # Read the file as a .CSV assign to a dataframe evapdf\nevapdf = pd.read_csv(\"all_quads_gross_evaporation.csv\")\nevapdf.head() # check structure",
"_____no_output_____"
]
],
[
[
"Structure looks like a spreadsheet as expected; lets plot the time series for cell '911'",
"_____no_output_____"
]
],
[
[
"evapdf.plot.line(x='YYYY-MM',y='911') # Plot quadrant 911 evaporation time series ",
"_____no_output_____"
]
],
[
[
"### Method 3: Get the actual file from an encrypted server\n\nThis section is saved for future semesters",
"_____no_output_____"
],
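[
"(Added note, as a sketch: for a server running encrypted web services the ``requests`` call is unchanged apart from the scheme, e.g. ``rget = requests.get('https://server-name/path/file.ext')``; certificate verification and API keys are the pieces deferred to a future semester.)",
"_____no_output_____"
],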
[
"### Method 1: Get data from a file on a remote server (unencrypted)\nThis section shows how to obtain data files from public URLs. \n\nPrerequesites:\n\n- You know the FQDN to the file it will be in structure of \"http://server-name/.../filename.ext\"\n- The server is running ordinary (unencrypted) web services, i.e. `http://...`\n\n#### Web Developer Notes\nIf you want to distribute files (web developers) the files need to be in the server webroot, but can be deep into the heirarchial structure.\n\nHere we will do an example with a file that contains topographic data in XYZ format, without header information.\n\nThe first few lines of the remote file look like:\n\n 74.90959724\t93.21251922\t0\n 75.17907367\t64.40278759\t0\n 94.9935575\t93.07951286\t0\n 95.26234119\t64.60091165\t0\n 54.04976655\t64.21159095\t0\n 54.52914363\t35.06934342\t0\n 75.44993558\t34.93079513\t0\n 75.09317373\t5.462959114\t0\n 74.87357468\t10.43130083\t0\n 74.86249082\t15.72938748\t0\n\nAnd importantly it is tab delimited.\n\nThe module to manipulate url in python is called ``urllib``\n\nGoogle search to learn more, here we are using only a small component without exception trapping.\n ",
"_____no_output_____"
]
],
[
[
"#Step 1: import needed modules to interact with the internet\nfrom urllib.request import urlopen # import a method that will connect to a url and read file contents\nimport pandas #import pandas",
"_____no_output_____"
]
],
[
[
"This next code fragment sets a string called ``remote_url``; it is just a variable, name can be anything that honors python naming rules.\nThen the ``urllib`` function ``urlopen`` with read and decode methods is employed, the result is stored in an object named ``elevationXYZ``",
"_____no_output_____"
]
],
[
[
"#Step 2: make the connection to the remote file (actually its implementing \"bash curl -O http://fqdn/path ...\")\nremote_url = 'http://www.rtfmps.com/share_files/pip-corner-sumps.txt' # \nelevationXYZ = urlopen(remote_url).read().decode().split() # Gets the file contents as a single vector, comma delimited, file is not retained locally",
"_____no_output_____"
]
],
[
[
"At this point the object exists as a single vector with hundreds of elements. We now need to structure the content. Here using python primatives, and knowing how the data are supposed to look, we prepare variables to recieve the structured results",
"_____no_output_____"
]
],
[
[
"#Step 3 Python primatives to structure the data, or use fancy modules (probably easy in numpy)\nhowmany = len(elevationXYZ) # how long is the vector?\nnrow = int(howmany/3)\nxyz = [[0 for j in range(3)] for j in range(nrow)] # null space to receive data define columnX",
"_____no_output_____"
]
],
[
[
"Now that everything is ready, we can extract from the object the values we want into ``xyz``",
"_____no_output_____"
]
],
[
[
"#Step4 Now will build xyz as a matrix with 3 columns\nindex = 0\nfor irow in range(0,nrow):\n xyz[irow][0]=float(elevationXYZ[index])\n xyz[irow][1]=float(elevationXYZ[index+1])\n xyz[irow][2]=float(elevationXYZ[index+2])\n index += 3 #increment the index",
"_____no_output_____"
]
],
[
[
"``xyz`` is now a 3-column float array and can now probably be treated as a data frame.\nHere we use a ``pandas`` method to build the dataframe.",
"_____no_output_____"
]
],
[
[
"df = pandas.DataFrame(xyz)",
"_____no_output_____"
]
],
[
[
"Get some info, yep three columns (ordered triples to be precise!)",
"_____no_output_____"
]
],
[
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 774 entries, 0 to 773\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 0 774 non-null float64\n 1 1 774 non-null float64\n 2 2 774 non-null float64\ndtypes: float64(3)\nmemory usage: 18.3 KB\n"
]
],
[
[
"And some summary statistics (meaningless for these data), but now have taken data from the internet and prepared it for analysis.",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"And lets look at the first few rows",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
]
],
[
[
"---\n\n## References\nOverland, B. (2018). Python Without Fear. Addison-Wesley \nISBN 978-0-13-468747-6. \n\nGrus, Joel (2015). Data Science from Scratch: First Principles with Python O’Reilly\nMedia. Kindle Edition.\n\nPrecord, C. (2010) wxPython 2.8 Application Development Cookbook Packt Publishing Ltd. Birmingham , B27 6PA, UK \nISBN 978-1-849511-78-0.",
"_____no_output_____"
]
],
[
[
"# Preamble script block to identify host, user, and kernel\nimport sys\n! hostname\n! whoami\nprint(sys.executable)\nprint(sys.version)\nprint(sys.version_info)",
"atomickitty.aws\nengr1330content\n/opt/conda/envs/python/bin/python\n3.8.3 (default, Jul 2 2020, 16:21:59) \n[GCC 7.3.0]\nsys.version_info(major=3, minor=8, micro=3, releaselevel='final', serial=0)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d96806debe71973caf98c2ce063263c11b5742 | 16,389 | ipynb | Jupyter Notebook | NeuralNetwork/Gradient_Descent.ipynb | billgavin/udacity-python | 843b984ed44d80e2bbfd0d295afa6321cf93773b | [
"MIT"
] | null | null | null | NeuralNetwork/Gradient_Descent.ipynb | billgavin/udacity-python | 843b984ed44d80e2bbfd0d295afa6321cf93773b | [
"MIT"
] | null | null | null | NeuralNetwork/Gradient_Descent.ipynb | billgavin/udacity-python | 843b984ed44d80e2bbfd0d295afa6321cf93773b | [
"MIT"
] | null | null | null | 27.498322 | 125 | 0.378974 | [
[
[
"import numpy as np\n\n#定义sigmoid激活函数\ndef sigmoid(x):\n return 1 / (1 + np.exp(-x))\n\n#激活函数的导数\ndef sigmoid_prime(x):\n return sigmoid(x) * (1 - sigmoid(x))\n\n#输入数据\nx = np.array([0.1, 0.3])\n\n#目标\ny = 0.2\n\n#权重\nweights = np.array([-0.8, 0.5])\n\n#更新权重的学习率\nlearnrate = 0.5\n\n#输入和权重的线性组合\nh = np.dot(x, weights)\n\n#神经网络输出\nnn_output = sigmoid(h)\nprint('Output: ', nn_output)\n\n#误差输出\nerror = y - nn_output\nprint('\\nError: ', error)\n\n#梯度输出\noutput_grad = sigmoid_prime(h)\n\nerror_term = error * output_grad\n\n#梯度下降一步\ndel_w = learnrate * error_term * x\n\nweights_new = weights + del_w\n\nprint(weights_new)",
"Output: 0.517492857666\n\nError: -0.317492857666\n[-0.8039638 0.48810859]\n"
],
[
"import pandas as pd\n\nadmissions = pd.read_csv('binary.csv')\n\ndata = pd.concat([admissions, pd.get_dummies(admissions['rank'], prefix = 'rank')], axis = 1).drop('rank', axis = 1)\ndata.head()",
"_____no_output_____"
],
[
"for field in ['gre', 'gpa']:\n mean, std = data[field].mean(), data[field].std()\n data.loc[:, field] = (data[field] - mean) / std\ndata.head()",
"_____no_output_____"
],
[
"np.random.seed(42)\nsample = np.random.choice(data.index, size = int(len(data) * 0.9), replace = False)\ndata, test_data = data.loc[sample], data.drop(sample)\nprint(data.head())\nprint('\\n')\nprint(test_data.head())",
" admit gre gpa rank_1 rank_2 rank_3 rank_4\n320 0 -1.105469 -0.656652 0 0 1 0\n103 0 -0.412928 1.445476 0 0 1 0\n70 0 0.452749 1.603135 0 0 1 0\n23 0 0.799020 -0.525269 0 0 0 1\n118 1 1.837832 0.814837 1 0 0 0\n\n\n admit gre gpa rank_1 rank_2 rank_3 rank_4\n127 0 1.318426 0.919944 0 0 0 1\n286 1 1.837832 -0.446439 1 0 0 0\n79 1 0.279614 1.603135 1 0 0 0\n336 0 -0.586063 -0.630376 0 0 1 0\n236 1 0.625884 0.263029 0 1 0 0\n"
],
[
"features, targets = data.drop('admit', axis = 1), data['admit']\nfeatures_test, targets_test = test_data.drop('admit', axis = 1), test_data['admit']\nfeatures_test.head()",
"_____no_output_____"
],
[
"n_records, n_features = features.shape",
"_____no_output_____"
],
[
"last_loss = None\nprint(n_records, n_features)",
"261 6\n"
],
[
"weights = np.random.normal(scale = 1 / n_features**.5, size = n_features)\nepochs = 1000\nlearnrate = 0.5\nprint(weights)",
"[ 0.49372812 0.21962125 1.11632317 0.03827958 -0.57416255 -0.01409667]\n"
],
[
"for e in range(epochs):\n del_w = np.zeros(weights.shape)\n for x, y in zip(features.values, targets):\n output = sigmoid(np.dot(x, weights))\n error = y - output\n error_term = error * sigmoid_prime(output)\n del_w += error_term * x\n weights += learnrate * del_w / n_records\n \n if e % (epochs / 10) == 0:\n out = sigmoid(np.dot(features, weights))\n loss = np.mean((out - targets) ** 2)\n if last_loss and last_loss <= loss:\n print('Train loss:', loss, \" WARNING Loss Increasing\")\n else:\n print('Train loss:', loss)\n last_loss = loss",
"Train loss: 0.192552179947 WARNING Loss Increasing\nTrain loss: 0.192552218203 WARNING Loss Increasing\nTrain loss: 0.192552243951 WARNING Loss Increasing\nTrain loss: 0.192552261359 WARNING Loss Increasing\nTrain loss: 0.192552273194 WARNING Loss Increasing\nTrain loss: 0.192552281292 WARNING Loss Increasing\nTrain loss: 0.192552286872 WARNING Loss Increasing\nTrain loss: 0.192552290747 WARNING Loss Increasing\nTrain loss: 0.19255229346 WARNING Loss Increasing\nTrain loss: 0.192552295377 WARNING Loss Increasing\n"
],
[
"test_out = sigmoid(np.dot(features_test, weights))\npredictions = test_out > 0.5\naccuracy = np.mean(predictions == targets_test)\nprint('Prediction accuracy: {:.3f}'.format(accuracy))",
"Prediction accuracy: 0.600\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d968d1f00c33e1bfcc30ac43f4c2f482eb6a02 | 86,171 | ipynb | Jupyter Notebook | bonus.ipynb | SandraTeh/sql-challenge | 4f2773517e92521ee436313503569ebce30a3c5d | [
"ADSL"
] | null | null | null | bonus.ipynb | SandraTeh/sql-challenge | 4f2773517e92521ee436313503569ebce30a3c5d | [
"ADSL"
] | null | null | null | bonus.ipynb | SandraTeh/sql-challenge | 4f2773517e92521ee436313503569ebce30a3c5d | [
"ADSL"
] | null | null | null | 48.98863 | 18,856 | 0.621543 | [
[
[
"#import dependencies\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import StrMethodFormatter\nimport numpy as np\nfrom config import username,password\nfrom sqlalchemy import create_engine",
"_____no_output_____"
],
[
"#create engine\nengine = create_engine(f'postgresql://{username}:{password}@localhost:5432/employees')\nengine.begin()\nconnection = engine.connect()",
"_____no_output_____"
],
[
"employees = pd.read_sql('select * from employees', connection)\nemployees",
"_____no_output_____"
]
],
[
[
"# Create a histogram to visualize the most common salary ranges for employees.",
"_____no_output_____"
]
],
[
[
"#display salaries table\nsalaries = pd.read_sql('select * from salaries', connection)\nsalaries.head()",
"_____no_output_____"
],
[
"#finding the maximum value of salary\nmax_salary = salaries[\"salary\"].max()\nmax_salary",
"_____no_output_____"
],
[
"#finding the minimum value of salary\nmin_salary = salaries[\"salary\"].min()\nmin_salary",
"_____no_output_____"
],
[
"#calculate gap for 6 bins\nbin_value = (max_salary - min_salary)/5\nbin_value\n\n#set bin value increase by $18K for each bin",
"_____no_output_____"
],
[
"salary_df = salaries[\"salary\"]\nsalary_df",
"_____no_output_____"
],
[
"#create histogram for salary\n#set bins_list\n\nbins_list = [40000,58000,76000,94000,112000,130000]\nax = salary_df.hist(bins=bins_list)\n\nplt.grid(axis='y', alpha=0.75)\nplt.xlabel('Salary Range',fontsize=15)\nplt.ylabel('Frequency',fontsize=15)\nplt.xticks(fontsize=15)\nplt.yticks(fontsize=15)\nplt.ylabel('Frequency',fontsize=15)\nplt.title('Most Common Salary Range',fontsize=15)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Create a bar chart of average salary by title.",
"_____no_output_____"
]
],
[
[
"#display salaries table\nsalaries = pd.read_sql('select * from salaries', connection)\nsalaries.head()",
"_____no_output_____"
],
[
"#display titles table\ntitles = pd.read_sql('select * from titles', connection)\ntitles.head()",
"_____no_output_____"
],
[
"#display employees table\nemployees= pd.read_sql('select * from employees', connection)\nemployees.head()",
"_____no_output_____"
],
[
"#Merge salaries table and employees table\ncombined_df = pd.merge(salaries, employees, on=\"emp_no\")\ncombined_df",
"_____no_output_____"
],
[
"#rename emp_title_id to title_id\ncombined_df = combined_df.rename(columns={\"emp_title_id\": \"title_id\"})",
"_____no_output_____"
],
[
"#display combined df\ncombined_df.head()",
"_____no_output_____"
],
[
"#combine combined df with titles table\ncombined_df = pd.merge(combined_df, titles, on=\"title_id\")\ncombined_df",
"_____no_output_____"
],
[
"#create a dataframe group by title\nsalary_by_title = combined_df.groupby(\"title\").mean()\nsalary_by_title",
"_____no_output_____"
],
[
"#drop emp_no column\nsalary_by_title = salary_by_title.drop(columns = \"emp_no\")\nsalary_by_title",
"_____no_output_____"
],
[
"# Reset Index\nsalary_by_title = salary_by_title.reset_index()\nsalary_by_title",
"_____no_output_____"
],
[
"# x_axis, y_axis & Tick Locations\nx_axis = salary_by_title[\"title\"]\nticks = np.arange(len(x_axis))\ny_axis = salary_by_title[\"salary\"]\n \n# Create Bar Chart Based on Above Data\nplt.bar(x_axis, y_axis, align=\"center\", alpha=0.5)\n\n# Create Ticks for Bar Chart's x_axis\nplt.xticks(ticks, x_axis, rotation=\"vertical\")\n\n# Set Labels & Title\nplt.ylabel(\"Salaries\")\nplt.xlabel(\"Titles\")\nplt.title(\"Average Salary by Title\")\n\n# Show plot\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Analysis\n\n1. Most of employees are paid within the salary range of 40,000 to 60,000\n2. The average salary by title is between 45,000 to 60,000\n3. The histogram shows that there is salary which is paid beyond the average salay.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0d97429a4813239d882c4fb71fad5dec64407eb | 722 | ipynb | Jupyter Notebook | AtCoder/20210515/C.ipynb | asha-ndf/asha-public-code | f53929513f9827a13e9ba9efafc85d4ada9e767a | [
"MIT"
] | null | null | null | AtCoder/20210515/C.ipynb | asha-ndf/asha-public-code | f53929513f9827a13e9ba9efafc85d4ada9e767a | [
"MIT"
] | null | null | null | AtCoder/20210515/C.ipynb | asha-ndf/asha-public-code | f53929513f9827a13e9ba9efafc85d4ada9e767a | [
"MIT"
] | null | null | null | 722 | 722 | 0.538781 | [
[
[
"S = input()\na = 0\nb = 0\nfor i in S:\n if i == 'o':\n a = a+1\n elif i == '?':\n b = b+1\n\nif a>4:\n print(0)\nelif a ==4:\n print(24)\nelif a ==3:\n print(24*b + 36)\nelif a == 2:\n print((b+2)**4+2*((b+1)**4) + b**4)\nelif a == 1:\n print((b+1)**4 - b**4)\nelse:\n print(b**4)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0d97bfc226cfaaaeef437cb0fdafd48a07e617a | 876,815 | ipynb | Jupyter Notebook | notebooks/09-Measles.ipynb | pydatawrangler/altair_notebooks | d2f7b7200fb448fe53874035e91161e7929f8034 | [
"BSD-3-Clause"
] | 179 | 2017-10-07T14:48:31.000Z | 2022-02-12T04:06:48.000Z | notebooks/09-Measles.ipynb | pydatawrangler/altair_notebooks | d2f7b7200fb448fe53874035e91161e7929f8034 | [
"BSD-3-Clause"
] | 19 | 2017-10-27T12:35:02.000Z | 2021-03-01T06:56:44.000Z | notebooks/09-Measles.ipynb | pydatawrangler/altair_notebooks | d2f7b7200fb448fe53874035e91161e7929f8034 | [
"BSD-3-Clause"
] | 94 | 2017-09-27T22:09:15.000Z | 2022-01-24T09:58:41.000Z | 940.788627 | 282,748 | 0.561846 | [
[
[
"# Measles Incidence in Altair",
"_____no_output_____"
],
[
"This is an example of reproducing the Wall Street Journal's famous [Measles Incidence Plot](http://graphics.wsj.com/infectious-diseases-and-vaccines/#b02g20t20w15) in Python using [Altair](http://github.com/ellisonbg/altair/).",
"_____no_output_____"
],
[
"## The Data\n\nWe'll start by downloading the data. Fortunately, others have made the data available in an easily digestible form; a github search revealed the dataset in CSV format here:",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nurl = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'\ndata = pd.read_csv(url, skiprows=2, na_values='-')\ndata.head()",
"_____no_output_____"
]
],
[
[
"## Data Munging with Pandas",
"_____no_output_____"
],
[
"This data needs to be cleaned-up a bit; we can do this with the Pandas library.\nWe first need to aggregate the incidence data by year:",
"_____no_output_____"
]
],
[
[
"annual = data.drop('WEEK', axis=1).groupby('YEAR').sum()\nannual.head()",
"_____no_output_____"
]
],
[
[
"Next, because Altair is built to handle data where each row corresponds to a single sample, we will stack the data, re-labeling the columns for clarity:",
"_____no_output_____"
]
],
[
[
"measles = annual.reset_index()\nmeasles = measles.melt('YEAR', var_name='state', value_name='incidence')\nmeasles.head()",
"_____no_output_____"
]
],
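[
[
"As a quick added sanity check, the reshaped table should have one row per state per year:",
"_____no_output_____"
]
],
[
[
"# rows should equal (number of years) x (number of state columns)\nmeasles.shape",
"_____no_output_____"
]
],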
[
[
"## Initial Visualization",
"_____no_output_____"
],
[
"Now we can use Altair's syntax for generating a heat map:",
"_____no_output_____"
]
],
[
[
"import altair as alt",
"_____no_output_____"
],
[
"alt.Chart(measles).mark_rect().encode(\n x='YEAR:O',\n y='state:N',\n color='incidence'\n).properties(\n width=600,\n height=400\n)",
"_____no_output_____"
]
],
[
[
"## Adjusting Aesthetics",
"_____no_output_____"
],
[
"All operative components of the visualization appear above, we now just have to adjust the aesthetic features to reproduce the original plot.\nAltair allows a wide range of flexibility for such adjustments, including size and color of markings, axis labels and titles, and more.\n\nHere is the data visualized again with a number of these adjustments:",
"_____no_output_____"
]
],
[
[
"# Define a custom colormape using Hex codes & HTML color names\ncolormap = alt.Scale(domain=[0, 100, 200, 300, 1000, 3000],\n range=['#F0F8FF', 'cornflowerblue', 'mediumseagreen', '#FFEE00', 'darkorange', 'firebrick'],\n type='sqrt')\n\nalt.Chart(measles).mark_rect().encode(\n alt.X('YEAR:O', axis=alt.Axis(title=None, ticks=False)),\n alt.Y('state:N', axis=alt.Axis(title=None, ticks=False)),\n alt.Color('incidence:Q', sort='ascending', scale=colormap, legend=None)\n).properties(\n width=800,\n height=500\n)",
"_____no_output_____"
]
],
[
[
"The result clearly shows the impact of the the measles vaccine introduced in the mid-1960s.",
"_____no_output_____"
],
[
"## Layering & Selections\n\nHere is another view of the data, using layering and selections to allow zooming-in",
"_____no_output_____"
]
],
[
[
"hover = alt.selection_single(on='mouseover', nearest=True, fields=['state'], empty='none')\n\nline = alt.Chart().mark_line().encode(\n alt.X('YEAR:Q',\n scale=alt.Scale(zero=False),\n axis=alt.Axis(format='f', title='year')\n ),\n alt.Y('incidence:Q', axis=alt.Axis(title='measles incidence')),\n detail='state:N',\n opacity=alt.condition(hover, alt.value(1.0), alt.value(0.1))\n).properties(\n width=800,\n height=300\n)\n\npoint = line.mark_point().encode(\n opacity=alt.value(0.0)\n).properties(\n selection=hover\n)\n\nmean = alt.Chart().mark_line().encode(\n x=alt.X('YEAR:Q', scale=alt.Scale(zero=False)),\n y='mean(incidence):Q',\n color=alt.value('black')\n)\n\ntext = alt.Chart().mark_text(align='right').encode(\n x='min(YEAR):Q',\n y='mean(incidence):Q',\n text='state:N',\n detail='state:N',\n opacity=alt.condition(hover, alt.value(1.0), alt.value(0.0))\n)\n\nalt.layer(point, line, mean, text, data=measles).interactive(bind_y=False)",
"_____no_output_____"
]
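,
[
"# Hedged sketch: an Altair chart can also be written out as standalone HTML for sharing.\n# The variable name and output filename below are illustrative assumptions, not part of the original analysis.\nchart = alt.layer(point, line, mean, text, data=measles).interactive(bind_y=False)\nchart.save('measles_incidence.html')",
"_____no_output_____"
]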
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0d99d2178a60d2bbde4d4af587e2f220f379751 | 43,369 | ipynb | Jupyter Notebook | .ipynb_checkpoints/Sparkify-checkpoint.ipynb | rabadzhiyski/projectSpark | 9f605a44e616d7dfd50fd1e548809219a1be94a8 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Sparkify-checkpoint.ipynb | rabadzhiyski/projectSpark | 9f605a44e616d7dfd50fd1e548809219a1be94a8 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/Sparkify-checkpoint.ipynb | rabadzhiyski/projectSpark | 9f605a44e616d7dfd50fd1e548809219a1be94a8 | [
"MIT"
] | null | null | null | 65.512085 | 4,336 | 0.556158 | [
[
[
"# Sparkify Project Workspace\nThis workspace contains a tiny subset (128MB) of the full dataset available (12GB). Feel free to use this workspace to build your project, or to explore a smaller subset with Spark before deploying your cluster on the cloud. Instructions for setting up your Spark cluster is included in the last lesson of the Extracurricular Spark Course content.\n\nYou can follow the steps below to guide your data analysis and model building portion of this project.",
"_____no_output_____"
]
],
[
[
"# import libraries\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom pyspark.sql import SparkSession\nfrom pyspark.ml.feature import RegexTokenizer, VectorAssembler, Normalizer, StandardScaler\nfrom pyspark.sql.functions import isnan, when, count, col\nfrom pyspark.sql.functions import udf\nfrom pyspark.sql.types import IntegerType",
"_____no_output_____"
],
[
"%%timeit\n# create a Spark session\nspark = SparkSession \\\n .builder \\\n .appName(\"Data Wrangling\") \\\n .getOrCreate()\n\n# to check spark WebUI: http://localhost:4040/jobs/ or call directly 'spark' to get a link",
"252 µs ± 9.64 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\n"
],
[
"spark",
"_____no_output_____"
]
],
[
[
"# Load and Clean Dataset\nIn this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids. ",
"_____no_output_____"
]
],
[
[
"pwd",
"_____no_output_____"
],
[
"path = '/home/freemo/Projects/largeData/mini_sparkify_event_data.json'",
"_____no_output_____"
],
[
"df = spark.read.json(file)",
"_____no_output_____"
],
[
"df.take(1)",
"_____no_output_____"
],
[
"print((df.count(), len(df.columns)))",
"_____no_output_____"
],
[
"mising_values.show()",
"_____no_output_____"
],
[
"df.show(5)",
"+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\n| artist| auth|firstName|gender|itemInSession|lastName| length|level| location|method| page| registration|sessionId| song|status| ts| userAgent|userId|\n+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\n| Martha Tilston|Logged In| Colin| M| 50| Freeman|277.89016| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Rockpools| 200|1538352117000|Mozilla/5.0 (Wind...| 30|\n|Five Iron Frenzy|Logged In| Micah| M| 79| Long|236.09424| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8| Canada| 200|1538352180000|\"Mozilla/5.0 (Win...| 9|\n| Adam Lambert|Logged In| Colin| M| 51| Freeman| 282.8273| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Time For Miracles| 200|1538352394000|Mozilla/5.0 (Wind...| 30|\n| Enigma|Logged In| Micah| M| 80| Long|262.71302| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8|Knocking On Forbi...| 200|1538352416000|\"Mozilla/5.0 (Win...| 9|\n| Daft Punk|Logged In| Colin| M| 52| Freeman|223.60771| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29|Harder Better Fas...| 200|1538352676000|Mozilla/5.0 (Wind...| 30|\n+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\nonly showing top 5 rows\n\n"
],
[
"spark.conf.set('spark.sql.repl.eagerEval.enabled', True)",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.show(1, vertical=True)",
"-RECORD 0-----------------------------\n artist | Martha Tilston \n auth | Logged In \n firstName | Colin \n gender | M \n itemInSession | 50 \n lastName | Freeman \n length | 277.89016 \n level | paid \n location | Bakersfield, CA \n method | PUT \n page | NextSong \n registration | 1538173362000 \n sessionId | 29 \n song | Rockpools \n status | 200 \n ts | 1538352117000 \n userAgent | Mozilla/5.0 (Wind... \n userId | 30 \nonly showing top 1 row\n\n"
],
[
"df.printSchema()",
"root\n |-- artist: string (nullable = true)\n |-- auth: string (nullable = true)\n |-- firstName: string (nullable = true)\n |-- gender: string (nullable = true)\n |-- itemInSession: long (nullable = true)\n |-- lastName: string (nullable = true)\n |-- length: double (nullable = true)\n |-- level: string (nullable = true)\n |-- location: string (nullable = true)\n |-- method: string (nullable = true)\n |-- page: string (nullable = true)\n |-- registration: long (nullable = true)\n |-- sessionId: long (nullable = true)\n |-- song: string (nullable = true)\n |-- status: long (nullable = true)\n |-- ts: long (nullable = true)\n |-- userAgent: string (nullable = true)\n |-- userId: string (nullable = true)\n\n"
],
[
"df.select([count(when(col(c).isNull(), c)).alias(c) for c in df.columns]).show()",
"+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n|artist|auth|firstName|gender|itemInSession|lastName|length|level|location|method|page|registration|sessionId| song|status| ts|userAgent|userId|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n| 58392| 0| 8346| 8346| 0| 8346| 58392| 0| 8346| 0| 0| 8346| 0|58392| 0| 0| 8346| 0|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n\n"
],
[
"df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()",
"+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n|artist|auth|firstName|gender|itemInSession|lastName|length|level|location|method|page|registration|sessionId| song|status| ts|userAgent|userId|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n| 58392| 0| 8346| 8346| 0| 8346| 58392| 0| 8346| 0| 0| 8346| 0|58392| 0| 0| 8346| 0|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n\n"
],
[
"# missing values in userID\ndf.select([count(when(isnan('userID'),True))]).show()",
"+--------------------------------------------+\n|count(CASE WHEN isnan(userID) THEN true END)|\n+--------------------------------------------+\n| 0|\n+--------------------------------------------+\n\n"
],
[
"# missing values in sessionID\ndf.select([count(when(isnan('sessionID'),True))]).show()",
"+-----------------------------------------------+\n|count(CASE WHEN isnan(sessionID) THEN true END)|\n+-----------------------------------------------+\n| 0|\n+-----------------------------------------------+\n\n"
],
[
"df.select([count(when(isnan('userID') | col('userID').isNull() , True))]).show()",
"+------------------------------------------------------------------+\n|count(CASE WHEN (isnan(userID) OR (userID IS NULL)) THEN true END)|\n+------------------------------------------------------------------+\n| 0|\n+------------------------------------------------------------------+\n\n"
],
[
"df.select([count(when(isnan('sessionID') | col('sessionID').isNull() , True))]).show()",
"+------------------------------------------------------------------------+\n|count(CASE WHEN (isnan(sessionID) OR (sessionID IS NULL)) THEN true END)|\n+------------------------------------------------------------------------+\n| 0|\n+------------------------------------------------------------------------+\n\n"
],
[
"df.distinct().show()",
"+---------------+----------+---------+------+-------------+--------+---------+-----+--------------------+------+-----------+-------------+---------+--------------------+------+-------------+--------------------+------+\n| artist| auth|firstName|gender|itemInSession|lastName| length|level| location|method| page| registration|sessionId| song|status| ts| userAgent|userId|\n+---------------+----------+---------+------+-------------+--------+---------+-----+--------------------+------+-----------+-------------+---------+--------------------+------+-------------+--------------------+------+\n|The Futureheads| Logged In| Ainsley| F| 80| Farley|120.81587| free|McAllen-Edinburg-...| PUT| NextSong|1538304455000| 187| Robot| 200|1538373133000|\"Mozilla/5.0 (Win...| 78|\n| null| Logged In| Ainsley| F| 81| Farley| null| free|McAllen-Edinburg-...| PUT|Thumbs Down|1538304455000| 187| null| 307|1538373134000|\"Mozilla/5.0 (Win...| 78|\n|O'Rosko Raricim| Logged In| Madison| F| 74| Morales| 90.56608| paid|Tampa-St. Petersb...| PUT| NextSong|1536287099000| 222| Terre Promise| 200|1538384924000|\"Mozilla/5.0 (Mac...| 25|\n| null|Logged Out| null| null| 1| null| null| free| null| GET| Help| null| 246| null| 200|1538388594000| null| |\n| Morgan Page| Logged In| Panav| M| 23| Myers|235.54567| paid| Morgantown, WV| PUT| NextSong|1538239045000| 237| Strange Condition| 200|1538394878000|\"Mozilla/5.0 (Win...| 15|\n| null|Logged Out| null| null| 0| null| null| paid| null| GET| Home| null| 236| null| 200|1538398122000| null| |\n| null| Logged In| Ethan| M| 15| Raymond| null| free|Hartford-West Har...| GET| Home|1534245996000| 26| null| 200|1538402423000|\"Mozilla/5.0 (Win...| 27|\n| Corona| Logged In| Ethan| M| 43| Johnson|210.36363| paid|Lexington-Fayette...| PUT| NextSong|1538080987000| 236| Rhythm Of The Night| 200|1538405355000|\"Mozilla/5.0 (Win...| 51|\n| Taylor Swift| Logged In| Joseph| M| 13|Phillips|230.47791| free|Allentown-Bethleh...| PUT| NextSong|1537484200000| 202| You Belong With Me| 200|1538411597000|Mozilla/5.0 (comp...| 93|\n| Martin Jondo| Logged In| Oliver| M| 8| Gilbert|266.34404| paid|Philadelphia-Camd...| PUT| NextSong|1535093367000| 225| All I Ever Know| 200|1538413737000|\"Mozilla/5.0 (Mac...| 81|\n| null| Logged In| Alexi| F| 125| Warren| null| paid|Spokane-Spokane V...| GET| Home|1532482662000| 260| null| 200|1538420695000|Mozilla/5.0 (Wind...| 54|\n| Slim Dusty| Logged In|Sebastian| M| 17| Wang| 198.922| free| Jackson, MS| PUT| NextSong|1538050164000| 52| Long Black Road| 200|1538422228000|\"Mozilla/5.0 (Mac...| 53|\n| null| Logged In| Ethan| M| 126| Johnson| null| paid|Lexington-Fayette...| GET| Settings|1538080987000| 236| null| 200|1538422909000|\"Mozilla/5.0 (Win...| 51|\n| Rise Against| Logged In| Lina| F| 7| Francis|242.25914| free|Los Angeles-Long ...| PUT| NextSong|1536948181000| 215| Savior| 200|1538424576000|Mozilla/5.0 (Wind...| 115|\n| null| Logged In| Lina| F| 17| Francis| null| free|Los Angeles-Long ...| GET|Roll Advert|1536948181000| 215| null| 200|1538426548000|Mozilla/5.0 (Wind...| 115|\n| null| Logged In| Sawyer| M| 41| Larson| null| free|Houston-The Woodl...| GET|Roll Advert|1538069638000| 302| null| 200|1538431064000|\"Mozilla/5.0 (Mac...| 98|\n| El Chojin| Logged In| Grant| M| 324| Flores|169.27302| paid|New York-Newark-J...| PUT| NextSong|1538120859000| 141| Mal Dia| 200|1538435220000|\"Mozilla/5.0 (Mac...| 142|\n| JoJo| Logged In| Riley| F| 10| Taylor|223.50322| free|Boston-Cambridge-...| PUT| NextSong|1536403972000| 298| Exceptional| 200|1538436432000|\"Mozilla/5.0 
(iPa...| 92|\n| Corona| Logged In| Nicole| F| 154| Beck|226.01098| paid|Vineland-Bridgeto...| PUT| NextSong|1532224335000| 123| Baby Baby| 200|1538439918000|\"Mozilla/5.0 (Mac...| 124|\n| Vonda Shepard| Logged In| Kaleb| M| 27|Thompson|280.76363| free|Los Angeles-Long ...| PUT| NextSong|1536988041000| 28|Baby_ Don't You B...| 200|1538443407000|\"Mozilla/5.0 (Mac...| 29|\n+---------------+----------+---------+------+-------------+--------+---------+-----+--------------------+------+-----------+-------------+---------+--------------------+------+-------------+--------------------+------+\nonly showing top 20 rows\n\n"
]
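,
[
"# The distinct() output above shows logged-out events carrying an empty-string userId.\n# Hedged cleanup sketch, assuming we only want events tied to identified users\n# (the df_clean name is an illustrative assumption):\ndf_clean = df.filter(df['userId'] != '')\ndf_clean.count()",
"_____no_output_____"
]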
],
[
[
"# Exploratory Data Analysis\nWhen you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore.\n\n### Define Churn\n\nOnce you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.\n\n### Explore Data\nOnce you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played.",
"_____no_output_____"
],
[
"# Feature Engineering\nOnce you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps.\n- Write a script to extract the necessary features from the smaller subset of data\n- Ensure that your script is scalable, using the best practices discussed in Lesson 3\n- Try your script on the full data set, debugging your script if necessary\n\nIf you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. Be sure to transfer over this work to the larger dataset when you work on your Spark cluster.",
"_____no_output_____"
],
[
"# Modeling\nSplit the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize.",
"_____no_output_____"
],
[
"# Final Steps\nClean up your code, adding comments and renaming variables to make the code easier to read and maintain. Refer to the Spark Project Overview page and Data Scientist Capstone Project Rubric to make sure you are including all components of the capstone project and meet all expectations. Remember, this includes thorough documentation in a README file in a Github repository, as well as a web app or blog post.",
"_____no_output_____"
]
]
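,
[
[
"# Hedged starter sketch for the churn label described in the Define Churn section\n# above; the helper and column names (flag_churn, churn_event, df_churn) are\n# illustrative assumptions, not part of the original template.\nflag_churn = udf(lambda page: 1 if page == 'Cancellation Confirmation' else 0, IntegerType())\ndf_churn = df.withColumn('churn_event', flag_churn(df['page']))",
"_____no_output_____"
]
]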
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
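,
[
"code"
]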
] |
d0d9a32180d924c63e132d0fb25d79da9128e463 | 12,747 | ipynb | Jupyter Notebook | jupyter notebooks/Analysis/Stocks/.ipynb_checkpoints/Portfolio Analysis-checkpoint 5.ipynb | rahulspsec/JupyterNB_PortfolioTools | f24a41aa04ad0516cd2fa6f56277c300cb4858da | [
"MIT"
] | null | null | null | jupyter notebooks/Analysis/Stocks/.ipynb_checkpoints/Portfolio Analysis-checkpoint 5.ipynb | rahulspsec/JupyterNB_PortfolioTools | f24a41aa04ad0516cd2fa6f56277c300cb4858da | [
"MIT"
] | null | null | null | jupyter notebooks/Analysis/Stocks/.ipynb_checkpoints/Portfolio Analysis-checkpoint 5.ipynb | rahulspsec/JupyterNB_PortfolioTools | f24a41aa04ad0516cd2fa6f56277c300cb4858da | [
"MIT"
] | null | null | null | 52.241803 | 670 | 0.678983 | [
[
[
"# Install package\n%pip install --upgrade portfoliotools",
"Requirement already up-to-date: portfoliotools in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (1.0.12)\nRequirement already satisfied, skipping upgrade: beautifulsoup4 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (4.8.1)\nRequirement already satisfied, skipping upgrade: pandas in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.25.1)\nRequirement already satisfied, skipping upgrade: numpy in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (1.17.1)\nRequirement already satisfied, skipping upgrade: mftool in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (1.3)\nRequirement already satisfied, skipping upgrade: requests in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (2.22.0)\nRequirement already satisfied, skipping upgrade: pandas-datareader in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.7.4)\nRequirement already satisfied, skipping upgrade: plotly in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (4.1.1)\nRequirement already satisfied, skipping upgrade: stocktrends in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.1.4)\nRequirement already satisfied, skipping upgrade: scipy in /Users/rahuljain/Library/Python/3.7/lib/python/site-packages (from portfoliotools) (1.4.1)\nRequirement already satisfied, skipping upgrade: scikit-learn in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.21.3)\nRequirement already satisfied, skipping upgrade: seaborn in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.9.0)\nRequirement already satisfied, skipping upgrade: statsmodels in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.10.1)\nRequirement already satisfied, skipping upgrade: nsepy in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (0.8)\nRequirement already satisfied, skipping upgrade: nsetools in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from portfoliotools) (1.0.11)\nRequirement already satisfied, skipping upgrade: soupsieve>=1.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from beautifulsoup4->portfoliotools) (1.9.4)\nRequirement already satisfied, skipping upgrade: python-dateutil>=2.6.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas->portfoliotools) (2.8.0)\nRequirement already satisfied, skipping upgrade: pytz>=2017.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from pandas->portfoliotools) (2019.2)\nRequirement already satisfied, skipping upgrade: bs4 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from mftool->portfoliotools) (0.0.1)\nRequirement already satisfied, skipping upgrade: chardet<3.1.0,>=3.0.2 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from requests->portfoliotools) (3.0.4)\nRequirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in 
/Users/rahuljain/Library/Python/3.7/lib/python/site-packages (from requests->portfoliotools) (1.25.8)\nRequirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from requests->portfoliotools) (2019.9.11)\nRequirement already satisfied, skipping upgrade: idna<2.9,>=2.5 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from requests->portfoliotools) (2.8)\nRequirement already satisfied, skipping upgrade: lxml in /Users/rahuljain/Library/Python/3.7/lib/python/site-packages (from pandas-datareader->portfoliotools) (4.6.3)\nRequirement already satisfied, skipping upgrade: wrapt in /Users/rahuljain/Library/Python/3.7/lib/python/site-packages (from pandas-datareader->portfoliotools) (1.11.2)\nRequirement already satisfied, skipping upgrade: retrying>=1.3.3 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from plotly->portfoliotools) (1.3.3)\nRequirement already satisfied, skipping upgrade: six in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from plotly->portfoliotools) (1.15.0)\nRequirement already satisfied, skipping upgrade: joblib>=0.11 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from scikit-learn->portfoliotools) (0.13.2)\nRequirement already satisfied, skipping upgrade: matplotlib>=1.4.3 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from seaborn->portfoliotools) (3.1.1)\nRequirement already satisfied, skipping upgrade: patsy>=0.4.0 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from statsmodels->portfoliotools) (0.5.1)\nRequirement already satisfied, skipping upgrade: click in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from nsepy->portfoliotools) (7.1.2)\nRequirement already satisfied, skipping upgrade: dateutils in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from nsetools->portfoliotools) (0.6.8)\nRequirement already satisfied, skipping upgrade: cycler>=0.10 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from matplotlib>=1.4.3->seaborn->portfoliotools) (0.10.0)\nRequirement already satisfied, skipping upgrade: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from matplotlib>=1.4.3->seaborn->portfoliotools) (2.4.2)\nRequirement already satisfied, skipping upgrade: kiwisolver>=1.0.1 in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from matplotlib>=1.4.3->seaborn->portfoliotools) (1.1.0)\nRequirement already satisfied, skipping upgrade: argparse in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from dateutils->nsetools->portfoliotools) (1.4.0)\nRequirement already satisfied, skipping upgrade: setuptools in /Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib>=1.4.3->seaborn->portfoliotools) (45.2.0)\n\u001b[33mWARNING: You are using pip version 19.2.3, however version 21.1.2 is available.\nYou should consider upgrading via the 'pip install --upgrade pip' command.\u001b[0m\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"from portfoliotools.screener.stock_screener import StockScreener\nfrom portfoliotools.screener.utility.util import get_ticker_list, getHistoricStockPrices, get_nse_index_list, get_port_ret_vol_sr\nfrom portfoliotools.screener.stock_screener import PortfolioStrategy, ADF_Reg_Model\nfrom tqdm import tqdm\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport plotly.graph_objects as go\nimport matplotlib.pyplot as plt\nfrom IPython.display import display_html\nfrom pandas.plotting import register_matplotlib_converters\nfrom plotly.subplots import make_subplots\nfrom datetime import datetime\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nregister_matplotlib_converters()\n%matplotlib inline\nsns.set()\npd.options.display.max_columns = None\npd.options.display.max_rows = None",
"_____no_output_____"
]
],
[
[
"### <font color = 'red'>USER INPUT</font>",
"_____no_output_____"
]
],
[
[
"tickers = get_ticker_list()\nasset_list = [ticker['Ticker'] for ticker in tickers]\n#asset_list.remove(\"GAIL\")\n#asset_list = ['ICICIBANK', 'HDFC', 'HDFCBANK', 'INFY', 'RELIANCE', 'ASIANPAINT', 'TCS', 'MARUTI', 'TATAMOTORS', 'CIPLA']\nasset_list = [\"ABBOTINDIA\", \"ACC\", \"ADANIENT\", \"ADANIGREEN\", \"ADANITRANS\", \"ALKEM\", \"AMBUJACEM\", \"APOLLOHOSP\", \"AUROPHARMA\", \"DMART\", \"BAJAJHLDNG\", \"BANDHANBNK\", \"BERGEPAINT\", \"BIOCON\", \"BOSCHLTD\", \"CADILAHC\", \"COLPAL\", \"DABUR\", \"DLF\", \"GAIL\", \"GODREJCP\", \"HAVELLS\", \"HDFCAMC\", \"HINDPETRO\", \"ICICIGI\", \"ICICIPRULI\", \"IGL\", \"INDUSTOWER\", \"NAUKRI\", \"INDIGO\", \"JUBLFOOD\", \"LTI\", \"LUPIN\", \"MARICO\", \"MOTHERSUMI\", \"MRF\", \"MUTHOOTFIN\", \"NMDC\", \"PETRONET\", \"PIDILITIND\", \"PEL\", \"PGHH\", \"PNB\", \"SBICARD\", \"SIEMENS\", \"TORNTPHARM\", \"UBL\", \"MCDOWELL-N\", \"VEDL\", \"YESBANK\"]\n\ncob = None #datetime(2020,2,1) # COB Date",
"_____no_output_____"
]
],
[
[
"### <font color = 'blue'>Portfolio Strategy Tools</font>",
"_____no_output_____"
]
],
[
[
"strat = PortfolioStrategy(asset_list, period = 1000, cob = cob)",
"_____no_output_____"
]
],
[
[
"#### Correlation Matrix",
"_____no_output_____"
]
],
[
[
"fig = strat.plot_correlation_matrix()\nfig.show()",
"_____no_output_____"
]
],
[
[
"#### Correlated Stocks",
"_____no_output_____"
]
],
[
[
"corr_pair = strat.get_correlation_pair()\nprint(\"Highly Correlated:\")\ndisplay_html(corr_pair[corr_pair['Correlation'] > .95])\nprint(\"\\nInversely Correlated:\")\ndisplay_html(corr_pair[corr_pair['Correlation'] < -.90])",
"_____no_output_____"
]
],
[
[
"#### <font color = 'black'>Efficient Frontier</font>",
"_____no_output_____"
]
],
[
[
"market_portfolio = strat.get_efficient_market_portfolio(plot = True, num_ports = 500, show_frontier =True) #plot =True to plot Frontier",
"_____no_output_____"
]
],
[
[
"#### Market Portfolio",
"_____no_output_____"
]
],
[
[
"market_portfolio = market_portfolio.T\nmarket_portfolio = market_portfolio[market_portfolio['MP'] != 0.00]\nplt.pie(market_portfolio.iloc[:-3,0], labels=market_portfolio.index.tolist()[:-3])\nmarket_portfolio",
"_____no_output_____"
]
],
[
[
"**Ticker Performance**",
"_____no_output_____"
]
],
[
[
"strat.calcStat(format_result = True)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d9a71fbc0bb838c8fb0574b1df87f966bd3848 | 16,923 | ipynb | Jupyter Notebook | notebooks/20190315_Differentiable_Mathematical_Program_Demo.ipynb | gizatt/scene_generation | cd978b4fe8ac58983894db3fb93d625c85578dd6 | [
"MIT"
] | 5 | 2018-11-27T18:46:01.000Z | 2020-09-06T19:59:12.000Z | notebooks/20190315_Differentiable_Mathematical_Program_Demo.ipynb | gizatt/scene_generation | cd978b4fe8ac58983894db3fb93d625c85578dd6 | [
"MIT"
] | null | null | null | notebooks/20190315_Differentiable_Mathematical_Program_Demo.ipynb | gizatt/scene_generation | cd978b4fe8ac58983894db3fb93d625c85578dd6 | [
"MIT"
] | null | null | null | 83.364532 | 10,372 | 0.805354 | [
[
[
"%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nfrom __future__ import print_function\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport time\n\nfrom pydrake.solvers.mathematicalprogram import MathematicalProgram, Solve\nfrom pydrake.solvers.ipopt import IpoptSolver",
"_____no_output_____"
],
[
"mp = MathematicalProgram()\nxy = mp.NewContinuousVariables(2, \"xy\")\n\n#def constraint(xy):\n# return np.array([xy[0]*xy[0] + 2.0*xy[1]*xy[1]])\n#constraint_bounds = (np.array([0.]), np.array([1.]))\n#mp.AddConstraint(constraint, constraint_bounds[0], constraint_bounds[1], xy)\n\ndef constraint(xy):\n theta = 1.0\n return np.array([[np.cos(theta), -np.sin(theta)],\n [np.sin(theta), np.cos(theta)]]).dot(\n np.array([xy[0], xy[1]]))\nconstraint_bounds = (np.array([-0.5, -0.5]), np.array([0.5, 0.5]))\nmp.AddConstraint(constraint, constraint_bounds[0], constraint_bounds[1], xy)\n\ndef cost(xy):\n return xy[0]*1.0 + xy[1]*1.0\n\nmp.AddCost(cost, xy)\n#solver = IpoptSolver()\n#result = solver.Solve(mp, None, None)\nresult = Solve(mp)\nxystar = result.GetSolution()\nprint(\"Successful: \", result.is_success())\nprint(\"Solver: \", result.get_solver_id().name())\nprint(\"xystar: \", xystar)",
"Successful: True\nSolver: SNOPT/f2c\nxystar: [-0.15058434 -0.69088665]\n"
],
[
"# Demo of pulling costs / constraints from MathematicalProgram\n# and evaluating them / getting gradients.\nfrom pydrake.forwarddiff import gradient, jacobian\n\ncosts = mp.GetAllCosts()\ntotal_cost_gradient = np.zeros(xystar.shape)\nfor cost in costs:\n print(\"Cost: \", cost)\n print(\"Eval at xystar: \", cost.evaluator().Eval(xystar))\n grad = gradient(cost.evaluator().Eval, xystar)\n print(\"Gradient at xystar: \", grad)\n total_cost_gradient += grad\nconstraints = mp.GetAllConstraints()\ntotal_constraint_gradient = np.zeros(xystar.shape)\nfor constraint in constraints:\n print(\"Constraint: \", constraint)\n val = constraint.evaluator().Eval(xystar)\n print(\"Eval at xystar: \", val)\n jac = jacobian(constraint.evaluator().Eval, xystar)\n print(\"Gradient at xystar: \", jac)\n total_constraint_gradient -= (val <= constraint_bounds[0] + 1E-6).dot(jac)\n total_constraint_gradient += (val >= constraint_bounds[1] - 1E-6).dot(jac)\n\nif np.any(total_cost_gradient):\n total_cost_gradient /= np.linalg.norm(total_cost_gradient)\nif np.any(total_constraint_gradient):\n total_constraint_gradient /= np.linalg.norm(total_constraint_gradient)\nprint(\"Total cost grad dir: \", total_cost_gradient)\nprint(\"Total constraint grad dir: \", total_constraint_gradient)",
"Cost: <pydrake.solvers.mathematicalprogram.Binding_Cost object at 0x7f4e53967030>\nEval at xystar: [-0.84147098]\nGradient at xystar: [1. 1.]\nConstraint: <pydrake.solvers.mathematicalprogram.Binding_Constraint object at 0x7f4e53967bf0>\nEval at xystar: [ 0.5 -0.5]\nGradient at xystar: [[ 0.54030231 -0.84147098]\n [ 0.84147098 0.54030231]]\nTotal cost grad dir: [0.70710678 0.70710678]\nTotal constraint grad dir: [-0.21295842 -0.97706126]\n"
],
[
"# Draw feasible region\nx_bounds = [-2., 2.]\ny_bounds = [-2., 2.]\nn_pts = [200, 300]\nX, Y = np.meshgrid(np.linspace(x_bounds[0], x_bounds[1], n_pts[0]),\n np.linspace(y_bounds[0], y_bounds[1], n_pts[1]),\n indexing=\"ij\")\n\nvals = np.ones(n_pts)\nfor constraint in mp.GetAllConstraints():\n for i in range(n_pts[0]):\n for j in range(n_pts[1]):\n vals_here = constraint.evaluator().Eval(np.array([X[i, j], Y[i, j]]))\n vals[i, j] = (\n np.all(vals_here >= constraint.evaluator().lower_bound()) and\n np.all(vals_here <= constraint.evaluator().upper_bound())\n )\n \n\nplt.imshow(vals, extent=x_bounds+y_bounds)\narrow_cost = plt.arrow(\n xystar[0], xystar[1],\n total_cost_gradient[0]/2., total_cost_gradient[1]/2.,\n width=0.05, color=\"g\")\narrow_constraint = plt.arrow(\n xystar[0], xystar[1],\n total_constraint_gradient[0]/2., total_constraint_gradient[1]/2.,\n width=0.05, color=\"r\")\nplt.legend([arrow_cost, arrow_constraint, ], [\"Cost Increase Dir\", \"Constraint Violation Dir\"]);",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0d9ae2dc86e782d2439a2a917525aaf6cc080d0 | 43,229 | ipynb | Jupyter Notebook | notebooks/chap02.ipynb | anvaari/ThinkBayes2 | 5b68b8720d2e4b2c95d569d4c2552721b9d75395 | [
"MIT"
] | null | null | null | notebooks/chap02.ipynb | anvaari/ThinkBayes2 | 5b68b8720d2e4b2c95d569d4c2552721b9d75395 | [
"MIT"
] | null | null | null | notebooks/chap02.ipynb | anvaari/ThinkBayes2 | 5b68b8720d2e4b2c95d569d4c2552721b9d75395 | [
"MIT"
] | null | null | null | 29.588638 | 476 | 0.48294 | [
[
[
"# Bayes's Theorem",
"_____no_output_____"
],
[
"Think Bayes, Second Edition\n\nCopyright 2020 Allen B. Downey\n\nLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)",
"_____no_output_____"
],
[
"In the previous chapter, we derived Bayes's Theorem:\n\n$$P(A|B) = \\frac{P(A) P(B|A)}{P(B)}$$\n\nAs an example, we used data from the General Social Survey and Bayes's Theorem to compute conditional probabilities.\nBut since we had the complete dataset, we didn't really need Bayes's Theorem.\nIt was easy enough to compute the left side of the equation directly, and no easier to compute the right side.\n\nBut often we don't have a complete dataset, and in that case Bayes's Theorem is more useful. In this chapter, we'll use it to solve several more challenging problems related to conditional probability.",
"_____no_output_____"
],
[
"## The Cookie Problem\n\nWe'll start with a thinly disguised version of an [urn problem](https://en.wikipedia.org/wiki/Urn_problem):\n\n> Suppose there are two bowls of cookies.\n>\n> * Bowl 1 contains 30 vanilla cookies and 10 chocolate cookies. \n>\n> * Bowl 2 contains 20 vanilla cookies and 20 chocolate cookies.\n>\n> Now suppose you choose one of the bowls at random and, without looking, choose a cookie at random. If the cookie is vanilla, what is the probability that it came from Bowl 1?\n\nWhat we want is the conditional probability that we chose from Bowl 1 given that we got a vanilla cookie, $P(B_1 | V)$.\n\nBut what we get from the statement of the problem is:\n\n* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 1, $P(V | B_1)$ and\n\n* The conditional probability of getting a vanilla cookie, given that we chose from Bowl 2, $P(V | B_2)$.\n",
"_____no_output_____"
],
[
"Bayes's Theorem tells us how they are related:\n\n$$P(B_1|V) = \\frac{P(B_1)~P(V|B_1)}{P(V)}$$\n\nThe term on the left is what we want. The terms on the right are:\n\n- $P(B_1)$, the probability that we chose Bowl 1,\n unconditioned by what kind of cookie we got. \n Since the problem says we chose a bowl at random, \n we assume $P(B_1) = 1/2$.\n\n- $P(V|B_1)$, the probability of getting a vanilla cookie\n from Bowl 1, which is 3/4.\n\n- $P(V)$, the probability of drawing a vanilla cookie from\n either bowl. ",
"_____no_output_____"
],
[
"To compute $P(V)$, we can use the law of total probability:\n\n$$P(V) = P(B_1)~P(V|B_1) ~+~ P(B_2)~P(V|B_2)$$\n\nPlugging in the numbers from the statement of the problem, we have\n\n$$P(V) = (1/2)~(3/4) ~+~ (1/2)~(1/2) = 5/8$$\n\nWe can also compute this result directly, like this: \n\n* Since we had an equal chance of choosing either bowl and the bowls contain the same number of cookies, we had the same chance of choosing any cookie. \n\n* Between the two bowls there are 50 vanilla and 30 chocolate cookies, so $P(V) = 5/8$.",
"_____no_output_____"
],
[
"Finally, we can apply Bayes's Theorem to compute the posterior probability of Bowl 1:\n\n$$P(B_1|V) = (1/2)~(3/4)~/~(5/8) = 3/5$$\n\nThis example demonstrates one use of Bayes's theorem: it provides a\nway to get from $P(B|A)$ to $P(A|B)$. \nThis strategy is useful in cases like this where it is easier to compute the terms on the right side than the term on the left.",
"_____no_output_____"
],
[
"## Diachronic Bayes\n\nThere is another way to think of Bayes's theorem: it gives us a way to\nupdate the probability of a hypothesis, $H$, given some body of data, $D$.\n\n**This interpretation is \"diachronic\", which means \"related to change over time\"; in this case, the probability of the hypotheses changes as we see new data.**\n\nRewriting Bayes's theorem with $H$ and $D$ yields:\n\n$$P(H|D) = \\frac{P(H)~P(D|H)}{P(D)}$$\n\nIn this interpretation, each term has a name:\n\n- $P(H)$ is the probability of the hypothesis before we see the data, called the prior probability, or just **prior**.\n\n- $P(H|D)$ is the probability of the hypothesis after we see the data, called the **posterior**.\n\n- $P(D|H)$ is the probability of the data under the hypothesis, called the **likelihood**.\n\n- $P(D)$ is the **total probability of the data**, under any hypothesis.\n\nSometimes we can compute the prior based on background information. For example, the cookie problem specifies that we choose a bowl at random with equal probability.\n\nIn other cases the prior is subjective; that is, reasonable people might disagree, either because they use different background information or because they interpret the same information differently.\n\nThe likelihood is usually the easiest part to compute. In the cookie\nproblem, we are given the number of cookies in each bowl, so we can compute the probability of the data under each hypothesis.",
"_____no_output_____"
],
[
"Computing the total probability of the data can be tricky. \nIt is supposed to be the probability of seeing the data under any hypothesis at all, but it can be hard to nail down what that means.\n\nMost often we simplify things by specifying a set of hypotheses that\nare:\n\n* Mutually exclusive, which means that only one of them can be true, and\n\n* Collectively exhaustive, which means one of them must be true.\n\nWhen these conditions apply, we can compute $P(D)$ using the law of total probability. For example, with two hypotheses, $H_1$ and $H_2$:\n\n$$P(D) = P(H_1)~P(D|H_1) + P(H_2)~P(D|H_2)$$\n\nAnd more generally, with any number of hypotheses:\n\n$$P(D) = \\sum_i P(H_i)~P(D|H_i)$$\n\nThe process in this section, using data and a prior probability to compute a posterior probability, is called a **Bayesian update**.",
"_____no_output_____"
],
[
"## Bayes Tables\n\nA convenient tool for doing a Bayesian update is a Bayes table.\nYou can write a Bayes table on paper or use a spreadsheet, but in this section I'll use a Pandas `DataFrame`.\n\nFirst I'll make empty `DataFrame` **with one row for each hypothesis**:",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\ntable = pd.DataFrame(index=['Bowl 1', 'Bowl 2'])",
"_____no_output_____"
]
],
[
[
"Now I'll add a column to represent the priors:",
"_____no_output_____"
]
],
[
[
"table['prior'] = 1/2, 1/2\ntable",
"_____no_output_____"
]
],
[
[
"And a column for the likelihoods:",
"_____no_output_____"
]
],
[
[
"table['likelihood'] = 3/4, 1/2\ntable",
"_____no_output_____"
]
],
[
[
"Here we see a difference from the previous method: we compute likelihoods for both hypotheses, not just Bowl 1:\n\n* The chance of getting a vanilla cookie from Bowl 1 is 3/4.\n\n* The chance of getting a vanilla cookie from Bowl 2 is 1/2.\n\nYou might notice that the likelihoods don't add up to 1. That's OK; each of them is a probability conditioned on a different hypothesis.\nThere's no reason they should add up to 1 and no problem if they don't.\n\nThe next step is similar to what we did with Bayes's Theorem; we multiply the priors by the likelihoods:",
"_____no_output_____"
]
],
[
[
"table['unnorm'] = table['prior'] * table['likelihood']\ntable",
"_____no_output_____"
]
],
[
[
"I call the result `unnorm` because these values are the **\"unnormalized posteriors\"**. Each of them is the product of a prior and a likelihood:\n\n$$P(B_i)~P(D|B_i)$$\n\nwhich is the numerator of Bayes's Theorem. \nIf we add them up, we have\n\n$$P(B_1)~P(D|B_1) + P(B_2)~P(D|B_2)$$\n\nwhich is the denominator of Bayes's Theorem, $P(D)$.\n\nSo we can compute the total probability of the data like this:",
"_____no_output_____"
]
],
[
[
"prob_data = table['unnorm'].sum()\nprob_data",
"_____no_output_____"
]
],
[
[
"Notice that we get 5/8, which is what we got by computing $P(D)$ directly.\n\nAnd we can compute the posterior probabilities like this:",
"_____no_output_____"
]
],
[
[
"table['posterior'] = table['unnorm'] / prob_data\ntable",
"_____no_output_____"
]
],
[
[
"The posterior probability for Bowl 1 is 0.6, which is what we got using Bayes's Theorem explicitly.\nAs a bonus, we also get the posterior probability of Bowl 2, which is 0.4.\n\nWhen we add up the unnormalized posteriors and divide through, we force the posteriors to add up to 1. **This process is called \"normalization\"**, which is why the total probability of the data is also called the \"normalizing constant\".",
"_____no_output_____"
],
[
"## The Dice Problem\n\nA Bayes table can also solve problems with more than two hypotheses. For example:\n\n> Suppose I have a box with a 6-sided die, an 8-sided die, and a 12-sided die. I choose one of the dice at random, roll it, and report that the outcome is a 1. What is the probability that I chose the 6-sided die?\n\nIn this example, there are three hypotheses with equal prior\nprobabilities. The data is my report that the outcome is a 1. \n\nIf I chose the 6-sided die, the probability of the data is\n1/6. If I chose the 8-sided die, the probability is 1/8, and if I chose the 12-sided die, it's 1/12.\n\nHere's a Bayes table that uses integers to represent the hypotheses:",
"_____no_output_____"
]
],
[
[
"table2 = pd.DataFrame(index=[6, 8, 12])",
"_____no_output_____"
]
],
[
[
"I'll use fractions to represent the prior probabilities and the likelihoods. That way they don't get rounded off to floating-point numbers.",
"_____no_output_____"
]
],
[
[
"from fractions import Fraction\n\ntable2['prior'] = Fraction(1, 3)\ntable2['likelihood'] = Fraction(1, 6), Fraction(1, 8), Fraction(1, 12)\ntable2",
"_____no_output_____"
]
],
[
[
"Once you have priors and likelhoods, the remaining steps are always the same, so I'll put them in a function:",
"_____no_output_____"
]
],
[
[
"def update(table):\n \"\"\"Compute the posterior probabilities.\"\"\"\n table['unnorm'] = table['prior'] * table['likelihood']\n prob_data = table['unnorm'].sum()\n table['posterior'] = table['unnorm'] / prob_data\n return prob_data",
"_____no_output_____"
]
],
[
[
"And call it like this.",
"_____no_output_____"
]
],
[
[
"prob_data = update(table2)",
"_____no_output_____"
]
],
[
[
"Here is the final Bayes table:",
"_____no_output_____"
]
],
[
[
"table2",
"_____no_output_____"
]
],
[
[
"The posterior probability of the 6-sided die is 4/9, which is a little more than the probabilities for the other dice, 3/9 and 2/9.\nIntuitively, the 6-sided die is the most likely because it had the highest likelihood of producing the outcome we saw.",
"_____no_output_____"
],
[
"## The Monty Hall Problem\n\nNext we'll use a Bayes table to solve one of the most contentious problems in probability.\n\nThe Monty Hall problem is based on a game show called *Let's Make a Deal*. If you are a contestant on the show, here's how the game works:\n\n* The host, Monty Hall, shows you three closed doors -- numbered 1, 2, and 3 -- and tells you that there is a prize behind each door.\n\n* One prize is valuable (traditionally a car), the other two are less valuable (traditionally goats).\n\n* The object of the game is to guess which door has the car. If you guess right, you get to keep the car.\n\nSuppose you pick Door 1. Before opening the door you chose, Monty opens Door 3 and reveals a goat. Then Monty offers you the option to stick with your original choice or switch to the remaining unopened door.",
"_____no_output_____"
],
[
"To maximize your chance of winning the car, should you stick with Door 1 or switch to Door 2?\n\nTo answer this question, we have to make some assumptions about the behavior of the host:\n\n1. Monty always opens a door and offers you the option to switch.\n\n2. He never opens the door you picked or the door with the car.\n\n3. If you choose the door with the car, he chooses one of the other\n doors at random.\n\nUnder these assumptions, you are better off switching. \nIf you stick, you win $1/3$ of the time. If you switch, you win $2/3$ of the time.\n\nIf you have not encountered this problem before, you might find that\nanswer surprising. You would not be alone; many people have the strong\nintuition that it doesn't matter if you stick or switch. There are two\ndoors left, they reason, so the chance that the car is behind Door 1 is 50%. But that is wrong.\n\nTo see why, it can help to use a Bayes table. We start with three\nhypotheses: the car might be behind Door 1, 2, or 3. According to the\nstatement of the problem, the prior probability for each door is 1/3.",
"_____no_output_____"
]
],
[
[
"table3 = pd.DataFrame(index=['Door 1', 'Door 2', 'Door 3'])\ntable3['prior'] = Fraction(1, 3)\ntable3",
"_____no_output_____"
]
],
[
[
"The data is that Monty opened Door 3 and revealed a goat. So let's\nconsider the probability of the data under each hypothesis:\n\n* If the car is behind Door 1, Monty chooses Door 2 or 3 at random, so the probability he opens Door 3 is $1/2$.\n\n* If the car is behind Door 2, Monty has to open Door 3, so the probability of the data under this hypothesis is 1.\n\n* If the car is behind Door 3, Monty does not open it, so the probability of the data under this hypothesis is 0.\n\nHere are the likelihoods. ",
"_____no_output_____"
]
],
[
[
"table3['likelihood'] = Fraction(1, 2), 1, 0\ntable3",
"_____no_output_____"
]
],
[
[
"Now that we have priors and likelihoods, we can use `update` to compute the posterior probabilities.",
"_____no_output_____"
]
],
[
[
"update(table3)\ntable3",
"_____no_output_____"
]
],
[
[
"After Monty opens Door 3, the posterior probability of Door 1 is $1/3$;\nthe posterior probability of Door 2 is $2/3$.\nSo you are better off switching from Door 1 to Door 2.",
"_____no_output_____"
],
[
"As this example shows, our intuition for probability is not always\nreliable. \nBayes's Theorem can help by providing a divide-and-conquer strategy:\n\n1. First, write down the hypotheses and the data.\n\n2. Next, figure out the prior probabilities.\n\n3. Finally, compute the likelihood of the data under each hypothesis.\n\nThe Bayes table does the rest.",
"_____no_output_____"
],
[
"## Summary\n\nIn this chapter we solved the Cookie Problem using Bayes's theorem explicitly and using a Bayes table.\nThere's no real difference between these methods, but the Bayes table can make it easier to compute the total probability of the data, especially for problems with more than two hypotheses.\n\nThen we solved the Dice Problem, which we will see again in the next chapter, and the Monty Hall problem, which you might hope you never see again.\n\nIf the Monty Hall problem makes your head hurt, you are not alone. But I think it demonstrates the power of Bayes's Theorem as a divide-and-conquer strategy for solving tricky problems. And I hope it provides some insight into *why* the answer is what it is.\n\nWhen Monty opens a door, **he provides information we can use to update our belief about the location of the car**. Part of the information is obvious. If he opens Door 3, we know the car is not behind Door 3. But part of the information is more subtle. Opening Door 3 is more likely if the car is behind Door 2, and less likely if it is behind Door 1. So the data is evidence in favor of Door 2. We will come back to this notion of evidence in future chapters.\n\nIn the next chapter we'll extend the Cookie Problem and the Dice Problem, and take the next step from basic probability to Bayesian statistics.\n\nBut first, you might want to work on the exercises.",
"_____no_output_____"
],
[
"## Exercises",
"_____no_output_____"
],
[
"**Exercise:** Suppose you have two coins in a box.\nOne is a normal coin with heads on one side and tails on the other, and one is a trick coin with heads on both sides. You choose a coin at random and see that one of the sides is heads.\nWhat is the probability that you chose the trick coin?",
"_____no_output_____"
]
],
[
[
"# Solution goes here\ncoins=pd.DataFrame(index=['Tricky','Normal'])\ncoins['prior']=Fraction(1,2)\ncoins['likelihood']=1,Fraction(1,2)\nupdate(coins)\ncoins.loc['Tricky','posterior']",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 14px;font-family:Time\" >\n\n**My Answer was right!** ✅\n</div>\n",
"_____no_output_____"
],
[
"**Exercise:** Suppose you meet someone and learn that they have two children.\nYou ask if either child is a girl and they say yes.\nWhat is the probability that both children are girls?\n\nHint: Start with four equally likely hypotheses.",
"_____no_output_____"
]
],
[
[
"# Solution goes here",
"_____no_output_____"
]
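,
[
"# Hedged sketch using the same Bayes-table pattern as above; the index labels are\n# the four equally likely birth orders (G = girl, B = boy), an illustrative naming choice.\nkids = pd.DataFrame(index=['GG', 'GB', 'BG', 'BB'])\nkids['prior'] = Fraction(1, 4)\n# Probability the parent answers yes to 'is either child a girl?' under each hypothesis:\nkids['likelihood'] = 1, 1, 1, 0\nupdate(kids)\nkids.loc['GG', 'posterior']",
"_____no_output_____"
]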
],
[
[
"<div style=\"font-size: 14px;font-family:Time\" >\n\n**My Answer was wrong!** ❎ :\nI don't understand problem correctly.\n</div>\n",
"_____no_output_____"
],
[
"**Exercise:** There are many variations of the [Monty Hall problem](https://en.wikipedia.org/wiki/Monty_Hall_problem). \nFor example, suppose Monty always chooses Door 2 if he can, and\nonly chooses Door 3 if he has to (because the car is behind Door 2).\n\nIf you choose Door 1 and Monty opens Door 2, what is the probability the car is behind Door 3?\n\nIf you choose Door 1 and Monty opens Door 3, what is the probability the car is behind Door 2?",
"_____no_output_____"
]
],
[
[
"# Solution goes here\nmonty_1=pd.DataFrame(index=['Door 1','Door 2','Door 3'])\nmonty_1['prior']=Fraction(1,3)\nmonty_1['likelihood']=1,0,1\nupdate(monty_1)\nmonty_1.loc['Door 3','posterior']",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 14px;font-family:Time\" >\n\n**My Answer was right!** ✅\n</div>\n",
"_____no_output_____"
],
[
"# Solution goes here\nmonty_2=pd.DataFrame(index=['Door 1','Door 2','Door 3'])\nmonty_2['prior']=Fraction(1,3)\nmonty_2['likelihood']=0,1,0\nupdate(monty_2)\nmonty_2.loc['Door 2','posterior']\nmonty_2",
"_____no_output_____"
],
[
"<div style=\"font-size: 14px;font-family:Time\" >\n\n**My Answer was right!** ✅\n</div>\n",
"_____no_output_____"
],
[
"**Exercise:** M&M's are small candy-coated chocolates that come in a variety of colors. \nMars, Inc., which makes M&M's, changes the mixture of colors from time to time.\nIn 1995, they introduced blue M&M's. \n\n* In 1994, the color mix in a bag of plain M&M's was 30\\% Brown, 20\\% Yellow, 20\\% Red, 10\\% Green, 10\\% Orange, 10\\% Tan. \n\n* In 1996, it was 24\\% Blue , 20\\% Green, 16\\% Orange, 14\\% Yellow, 13\\% Red, 13\\% Brown.\n\nSuppose a friend of mine has two bags of M&M's, and he tells me\nthat one is from 1994 and one from 1996. He won't tell me which is\nwhich, but he gives me one M&M from each bag. One is yellow and\none is green. What is the probability that the yellow one came\nfrom the 1994 bag?\n\nHint: The trick to this question is to define the hypotheses and the data carefully.",
"_____no_output_____"
]
],
[
[
"# Solution goes here\nMandM=pd.DataFrame(index=['From 1994','From 1996'])\nMandM['prior']=Fraction(1,2)\nMandM['likelihood']=Fraction(2,10),Fraction(13,100)\nupdate(MandM)\nMandM.loc['From 1994','posterior']\nMandM\n",
"_____no_output_____"
]
],
[
[
"<div style=\"font-size: 14px;font-family:Time\" >\n\n**My Answer was wrong!** ❎ :\nI should pay attention to this fact, when we calculate probability that yellow came from 1994, green must be from 1996.\nAnd I shall recalculate likelihood\n</div>\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0d9b028df02b23b861e1c469d134a024d5fc142 | 239,359 | ipynb | Jupyter Notebook | notebooks/Using the Predictive Model.ipynb | AldisiRana/SE_KGE | 876f51081016d06cbb9d1a2d220f7fe4756b21aa | [
"MIT"
] | 1 | 2019-04-15T13:26:29.000Z | 2019-04-15T13:26:29.000Z | notebooks/Using the Predictive Model.ipynb | AldisiRana/SE_KGE | 876f51081016d06cbb9d1a2d220f7fe4756b21aa | [
"MIT"
] | 17 | 2019-04-19T19:31:33.000Z | 2019-09-02T13:25:53.000Z | notebooks/Using the Predictive Model.ipynb | AldisiRana/SE_KGE | 876f51081016d06cbb9d1a2d220f7fe4756b21aa | [
"MIT"
] | 1 | 2019-07-19T12:10:33.000Z | 2019-07-19T12:10:33.000Z | 35.539569 | 142 | 0.343672 | [
[
[
"# Using the Prediction Model\n\n## Environment",
"_____no_output_____"
]
],
[
[
"import getpass\nimport json\nimport os\nimport sys\nimport time\n\nimport pandas as pd\nfrom tqdm import tqdm_notebook as tqdm\n\nfrom seffnet.constants import (\n DEFAULT_EMBEDDINGS_PATH, DEFAULT_GRAPH_PATH,\n DEFAULT_MAPPING_PATH, DEFAULT_PREDICTIVE_MODEL_PATH,\n RESOURCES\n)\nfrom seffnet.literature import query_europe_pmc",
"_____no_output_____"
],
[
"print(sys.version)",
"3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]\n"
],
[
"print(time.asctime())",
"Fri Jan 3 12:12:03 2020\n"
],
[
"print(getpass.getuser())",
"aldis\n"
]
],
[
[
"# Loading the Data",
"_____no_output_____"
]
],
[
[
"from seffnet.default_predictor import predictor",
"_____no_output_____"
],
[
"print(f\"\"\"Loaded default predictor using paths:\n\nembeddings: {DEFAULT_EMBEDDINGS_PATH}\ngraph: {DEFAULT_GRAPH_PATH}\nmodel: {DEFAULT_PREDICTIVE_MODEL_PATH}\nmapping: {DEFAULT_MAPPING_PATH}\n\"\"\")",
"Loaded default predictor using paths:\n\nembeddings: c:\\users\\aldis\\documents\\github\\seffnet\\resources\\embeddings\\0812_weighted_node2vec_emb.embeddings\ngraph: c:\\users\\aldis\\documents\\github\\seffnet\\resources\\basic_graphs\\fullgraph_with_chemsim.edgelist\nmodel: c:\\users\\aldis\\documents\\github\\seffnet\\resources\\predictive_models\\0812_weighted_node2vec_predictive_model.pkl\nmapping: c:\\users\\aldis\\documents\\github\\seffnet\\resources\\mapping\\fullgraph_nodes_mapping.tsv\n\n"
]
],
[
[
"# Examples of different kinds of predictions with literature evidence",
"_____no_output_____"
],
[
"## side effect - target association",
"_____no_output_____"
]
],
[
[
"r = predictor.find_new_relation(\n source_name='EGFR_HUMAN',\n target_name='Papulopustular rash',\n)\nprint(json.dumps(r, indent=2))\n#PMID: 18165622",
"{\n \"source\": {\n \"node_id\": \"9587\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P00533\",\n \"name\": \"EGFR_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"target\": {\n \"node_id\": \"6791\",\n \"namespace\": \"umls\",\n \"identifier\": \"C2609319\",\n \"name\": \"Papulopustular rash\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.62433\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='9451', # Histamine receptor H1\n target_id='331', # Drowsiness\n)\nprint(json.dumps(r, indent=2))\n#PMID: 26626077",
"{\n \"source\": {\n \"node_id\": \"9451\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P35367\",\n \"name\": \"HRH1_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"target\": {\n \"node_id\": \"331\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0013144\",\n \"name\": \"Drowsiness\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.08185\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='9325', # SC6A2\n target_id='56', # Tachycardia\n)\nprint(json.dumps(r, indent=2))\n#PMID: 30952858",
"{\n \"source\": {\n \"node_id\": \"9325\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P23975\",\n \"name\": \"SC6A2_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"target\": {\n \"node_id\": \"56\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0039231\",\n \"name\": \"Tachycardia\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.52279\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='8670', # ACES_HUMAN\n target_id='309', # Bradycardia\n)\nprint(json.dumps(r, indent=2))\n#PMID: 30952858",
"{\n \"source\": {\n \"node_id\": \"8670\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P22303\",\n \"name\": \"ACES_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"target\": {\n \"node_id\": \"309\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0428977\",\n \"name\": \"Bradycardia\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.85649\n}\n"
]
],
[
[
"## drug- side effect association",
"_____no_output_____"
]
],
[
[
"r = predictor.find_new_relation(\n source_id='3534', # diazepam\n target_id='670', # Libido decreased\n)\nprint(json.dumps(r, indent=2))\n#PMID: 29888057",
"{\n \"source\": {\n \"node_id\": \"3534\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"3016\",\n \"name\": \"Diazepam\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"670\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0011124\",\n \"name\": \"Libido decreased\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.00453\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='1148', # Cytarabine \n target_id='1149', # Anaemia megaloblastic\n)\nprint(json.dumps(r, indent=2))\n# PMID: 23157436",
"{\n \"source\": {\n \"node_id\": \"1148\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"6175\",\n \"name\": \"Cytidine\",\n \"entity_type\": \"experimental drug\"\n },\n \"target\": {\n \"node_id\": \"1149\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0002888\",\n \"name\": \"Anaemia megaloblastic\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.22571\n}\n"
]
],
[
[
"## drug-target association",
"_____no_output_____"
]
],
[
[
"r = predictor.find_new_relation(\n source_id='14672', # Sertindole \n target_id='9350', # CHRM1 receptor\n)\nprint(json.dumps(r, indent=2))\n# PMID: 29942259 ",
"{\n \"source\": {\n \"node_id\": \"14672\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"60149\",\n \"name\": \"Sertindole\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"9350\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P11229\",\n \"name\": \"ACM1_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"lor\": 0.01543\n}\n"
]
],
[
[
"# Example of predicting relations using node2vec model and embeddings",
"_____no_output_____"
]
],
[
[
"def get_predictions_df(curie, results_type=None):\n results = predictor.find_new_relations(\n node_curie=curie,\n results_type=results_type,\n k=50,\n )\n results_df = pd.DataFrame(results['predictions'])\n results_df = results_df[['node_id', 'namespace', 'identifier', 'name', 'lor', 'novel']]\n return results['query'], results_df",
"_____no_output_____"
],
[
"query, df = get_predictions_df('pubchem.compound:2159', 'phenotype')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"2173\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"2159\",\n \"name\": \"Amisulpride\",\n \"entity_type\": \"approved drug\"\n },\n \"k\": 50,\n \"type\": \"phenotype\"\n}\n"
],
[
"query, df = get_predictions_df('pubchem.compound:4585', 'phenotype')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"4915\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"4585\",\n \"name\": \"Olanzapine\",\n \"entity_type\": \"approved drug\"\n },\n \"k\": 50,\n \"type\": \"phenotype\"\n}\n"
],
[
"query, df = get_predictions_df('uniprot:P08172', 'phenotype')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"9429\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P08172\",\n \"name\": \"ACM2_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"k\": 50,\n \"type\": \"phenotype\"\n}\n"
],
[
"query, df = get_predictions_df('uniprot:P08588', 'phenotype')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"8733\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P08588\",\n \"name\": \"ADRB1_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"k\": 50,\n \"type\": \"phenotype\"\n}\n"
],
[
"query, df = get_predictions_df('uniprot:P22303', 'phenotype')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"8670\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"P22303\",\n \"name\": \"ACES_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"k\": 50,\n \"type\": \"phenotype\"\n}\n"
],
[
"query, df = get_predictions_df('uniprot:Q9UBN7', 'chemical')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"12164\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"Q9UBN7\",\n \"name\": \"HDAC6_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"k\": 50,\n \"type\": \"chemical\"\n}\n"
],
[
"query, df = get_predictions_df(\"umls:C0030567\", 'chemical')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"2248\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0030567\",\n \"name\": \"Parkinson's disease\",\n \"entity_type\": \"phenotype\"\n },\n \"k\": 50,\n \"type\": \"chemical\"\n}\n"
],
[
"results = []\nfor ind, row in df.iterrows():\n pmcid = []\n lit = query_europe_pmc(\n query_entity=row['name'],\n target_entities=[\n 'umls:C0030567'\n ],\n )\n i = 0\n for x in lit:\n if i > 7:\n pmcid.append('... ect')\n lit.close()\n break\n pmcid.append(x['pmcid'])\n i+=1\n results.append((len(pmcid), pmcid))\ndf['co-occurance'] = results",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.to_csv(os.path.join(RESOURCES, 'parkinsons-chemicals.tsv'), sep='\\t')",
"_____no_output_____"
],
[
"query, df = get_predictions_df('umls:C0242422', 'chemical')\nprint(json.dumps(query, indent=2))\ndf",
"{\n \"entity\": {\n \"node_id\": \"852\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0242422\",\n \"name\": \"Parkinsonism\",\n \"entity_type\": \"phenotype\"\n },\n \"k\": 50,\n \"type\": \"chemical\"\n}\n"
],
[
"query, df = get_predictions_df('pubchem.compound:5095', 'phenotype')\nprint(json.dumps(query, indent=2))\ndf\n#PMID: 29241812",
"{\n \"entity\": {\n \"node_id\": \"5346\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"5095\",\n \"name\": \"Ropinirole\",\n \"entity_type\": \"approved drug\"\n },\n \"k\": 30,\n \"type\": \"phenotype\"\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='2071', #Amantadine\n target_id='2248', #Parkinson's disease\n)\nprint(json.dumps(r, indent=2))\n#PMID: 21654146",
"{\n \"source\": {\n \"node_id\": \"2071\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"2130\",\n \"name\": \"Amantadine\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"2248\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0030567\",\n \"name\": \"Parkinson's disease\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.00478\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='5346', #Ropinirole\n target_id='1348', #Restless legs syndrome\n)\nprint(json.dumps(r, indent=2))\n#PMID: 21654146",
"{\n \"source\": {\n \"node_id\": \"5346\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"5095\",\n \"name\": \"Ropinirole\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"1348\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0035258\",\n \"name\": \"Restless legs syndrome\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.00667\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='3627', #Disulfiram\n target_id='2318', #Malignant melanoma\n)\nprint(json.dumps(r, indent=2))\n#PMID: 21654146",
"{\n \"source\": {\n \"node_id\": \"3627\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"3117\",\n \"name\": \"Disulfiram\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"2318\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0025202\",\n \"name\": \"Malignant melanoma\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.51121\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='17528', #Brigatinib\n target_id='5148', #Colorectal cancer\n)\nprint(json.dumps(r, indent=2))\n#PMID: 31410188",
"{\n \"source\": {\n \"node_id\": \"17528\",\n \"namespace\": \"uniprot\",\n \"identifier\": \"Q99640\",\n \"name\": \"PMYT1_HUMAN\",\n \"entity_type\": \"target\"\n },\n \"target\": {\n \"node_id\": \"5148\",\n \"namespace\": \"umls\",\n \"identifier\": \"C1527249\",\n \"name\": \"Colorectal cancer\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.8214\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='6995', #dasatinib\n target_id='1179', #Diffuse large B-cell lymphoma\n)\nprint(json.dumps(r, indent=2))\n#PMID: 31383760",
"{\n \"source\": {\n \"node_id\": \"6995\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"3062316\",\n \"name\": \"Dasatinib\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"1179\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0079744\",\n \"name\": \"Diffuse large B-cell lymphoma\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.83577\n}\n"
],
[
"r = predictor.find_new_relation(\n source_id='5265', #ribavirin\n target_id='947', #Candida infection\n)\nprint(json.dumps(r, indent=2))\n#PMID: 31307986",
"{\n \"source\": {\n \"node_id\": \"5265\",\n \"namespace\": \"pubchem.compound\",\n \"identifier\": \"37542\",\n \"name\": \"Ribavirin\",\n \"entity_type\": \"approved drug\"\n },\n \"target\": {\n \"node_id\": \"947\",\n \"namespace\": \"umls\",\n \"identifier\": \"C0006840\",\n \"name\": \"Candida infection\",\n \"entity_type\": \"phenotype\"\n },\n \"lor\": 0.12888\n}\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d9beef3244d35a45d054cffdb9d21c39785eff | 30,904 | ipynb | Jupyter Notebook | lab2/Lab_02_Data_Preprocessing.ipynb | dnhirapara/049_DarshikHirapara | 90921f4c58dd44668953cc94d853933e4d07be83 | [
"MIT"
] | null | null | null | lab2/Lab_02_Data_Preprocessing.ipynb | dnhirapara/049_DarshikHirapara | 90921f4c58dd44668953cc94d853933e4d07be83 | [
"MIT"
] | null | null | null | lab2/Lab_02_Data_Preprocessing.ipynb | dnhirapara/049_DarshikHirapara | 90921f4c58dd44668953cc94d853933e4d07be83 | [
"MIT"
] | null | null | null | 71.869767 | 15,462 | 0.701657 | [
[
[
"<a href=\"https://colab.research.google.com/github/dnhirapara/049_DarshikHirapara/blob/main/lab2/Lab_02_Data_Preprocessing.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"import pandas as pd\nimport numpy as np \nimport seaborn as sns\nfrom sklearn.preprocessing import MinMaxScaler, StandardScaler\nfrom sklearn.impute import SimpleImputer \nfrom sklearn.preprocessing import LabelEncoder",
"_____no_output_____"
],
[
"cardata_df = pd.read_csv('/content/drive/MyDrive/ML_Labs/lab2/exercise-car-data.csv', index_col=[0])\nprint(\"\\nData :\\n\",cardata_df)\n\nprint(\"\\nData statistics\\n\",cardata_df.describe())",
"\nData :\n Price Age KM FuelType ... Automatic CC Doors Weight\nIndex ... \n0 13500 23.0 46986 Diesel ... 0 2000 three 1165\n1 13750 23.0 72937 Diesel ... 0 2000 3 1165\n2 13950 24.0 41711 Diesel ... 0 2000 3 1165\n3 14950 26.0 48000 Diesel ... 0 2000 3 1165\n4 13750 30.0 38500 Diesel ... 0 2000 3 1170\n... ... ... ... ... ... ... ... ... ...\n1431 7500 NaN 20544 Petrol ... 0 1300 3 1025\n1432 10845 72.0 ?? Petrol ... 0 1300 3 1015\n1433 8500 NaN 17016 Petrol ... 0 1300 3 1015\n1434 7250 70.0 ?? NaN ... 0 1300 3 1015\n1435 6950 76.0 1 Petrol ... 0 1600 5 1114\n\n[1436 rows x 10 columns]\n\nData statistics\n Price Age ... CC Weight\ncount 1436.000000 1336.000000 ... 1436.000000 1436.00000\nmean 10730.824513 55.672156 ... 1566.827994 1072.45961\nstd 3626.964585 18.589804 ... 187.182436 52.64112\nmin 4350.000000 1.000000 ... 1300.000000 1000.00000\n25% 8450.000000 43.000000 ... 1400.000000 1040.00000\n50% 9900.000000 60.000000 ... 1600.000000 1070.00000\n75% 11950.000000 70.000000 ... 1600.000000 1085.00000\nmax 32500.000000 80.000000 ... 2000.000000 1615.00000\n\n[8 rows x 6 columns]\n"
],
[
"cardata_df.dropna(how='all',inplace=True)\nprint(cardata_df.dtypes)\n\n# All rows, all columns except last \nnew_X = cardata_df.iloc[:, :-1].values\n\n# Only last column \nnew_Y = cardata_df.iloc[:, -1].values \n\n#FuelType\nnew_X[:,3]=new_X[:,3].astype('str')\nle = LabelEncoder()\nnew_X[ : ,3] = le.fit_transform(new_X[ : ,3])\n\nprint(\"\\n\\nInput before imputation : \\n\\n\", new_X)",
"Price int64\nAge float64\nKM object\nFuelType object\nHP object\nMetColor float64\nAutomatic int64\nCC int64\nDoors object\nWeight int64\ndtype: object\n\n\nInput before imputation : \n\n [[13500 23.0 '46986' ... 0 2000 'three']\n [13750 23.0 '72937' ... 0 2000 '3']\n [13950 24.0 '41711' ... 0 2000 '3']\n ...\n [8500 nan '17016' ... 0 1300 '3']\n [7250 70.0 '??' ... 0 1300 '3']\n [6950 76.0 '1' ... 0 1600 '5']]\n"
],
[
"str_to_num_dictionary={\"zero\":0,\"one\":1,\"two\":2,\"three\":3,\"four\":4,\"five\":5,\"six\":6,\"seven\":7,\"eight\":8,\"nune\":9,\"ten\":10}\n\n# 3b. Imputation (Replacing null values with mean value of that attribute)\n#for col-3\nfor i in range(new_X[:,3].size):\n #KM\n if new_X[i,2]==\"??\":\n new_X[i,2]=np.nan\n #HP\n if new_X[i,4]==\"????\":\n new_X[i,4]=np.nan\n #Doors\n temp_str = str(new_X[i,8])\n if temp_str.isnumeric():\n new_X[i,8]=int(temp_str)\n else:\n new_X[i,8]=str_to_num_dictionary[temp_str]\n# Using Imputer function to replace NaN values with mean of that parameter value \nimputer = SimpleImputer(missing_values = np.nan,strategy = \"mean\")\nmode_imputer = SimpleImputer(missing_values = np.nan,strategy = \"most_frequent\")\n\n# Fitting the data, function learns the stats \nthe_imputer = imputer.fit(new_X[:, 0:3])\n# fit_transform() will execute those stats on the input ie. X[:, 1:3] \nnew_X[:, 0:3] = the_imputer.transform(new_X[:, 0:3])\n\n# Fitting the data, function learns the stats \nthe_mode_imputer = mode_imputer.fit(new_X[:, 3:4]) \nnew_X[:, 3:4] = the_mode_imputer.transform(new_X[:, 3:4])\n\n# Fitting the data, function learns the stats \nthe_imputer = imputer.fit(new_X[:, 4:5])\nnew_X[:, 4:5] = the_imputer.transform(new_X[:, 4:5])\n\n# Fitting the data, function learns the stats \nthe_mode_imputer = mode_imputer.fit(new_X[:, 5:6]) \nnew_X[:, 5:6] = the_mode_imputer.transform(new_X[:, 5:6])\n\n# filling the missing value with mean \nprint(\"\\n\\nNew Input with Mean Value for NaN : \\n\\n\", new_X)",
"\n\nNew Input with Mean Value for NaN : \n\n [[13500.0 23.0 46986.0 ... 0 2000 3]\n [13750.0 23.0 72937.0 ... 0 2000 3]\n [13950.0 24.0 41711.0 ... 0 2000 3]\n ...\n [8500.0 55.67215568862275 17016.0 ... 0 1300 3]\n [7250.0 70.0 68647.23997185081 ... 0 1300 3]\n [6950.0 76.0 1.0 ... 0 1600 5]]\n"
],
[
"new_data_df = pd.DataFrame(new_X,columns=cardata_df.columns[:-1])\nnew_data_df = new_data_df.astype(float)\nnew_data_df.dtypes",
"_____no_output_____"
],
[
"#feature selection\ncorr = new_data_df.corr()\nprint(corr.head())\nsns.heatmap(corr)",
" Price Age KM ... Automatic CC Doors\nPrice 1.000000 -0.845111 -0.565016 ... 0.033081 0.165067 0.185326\nAge -0.845111 1.000000 0.495199 ... 0.030931 -0.116255 -0.151785\nKM -0.565016 0.495199 1.000000 ... -0.080743 0.296281 -0.036021\nFuelType 0.022730 0.033599 -0.356238 ... 0.073860 -0.499114 -0.018434\nHP 0.308414 -0.152946 -0.332984 ... 0.013753 0.053466 0.096938\n\n[5 rows x 9 columns]\n"
],
[
"columns = np.full((len(new_data_df.columns),), True, dtype=bool)\nfor i in range(corr.shape[0]):\n for j in range(i+1, corr.shape[0]):\n if corr.iloc[i,j] >= 0.9:\n if columns[j]:\n columns[j] = False\nselected_columns = new_data_df.columns[columns]\nprint(selected_columns)\n\nnew_data_df = new_data_df[selected_columns]",
"Index(['Price', 'Age', 'KM', 'FuelType', 'HP', 'MetColor', 'Automatic', 'CC',\n 'Doors'],\n dtype='object')\n"
],
[
"# Step 5a : Perform scaling and standardization\nnew_X = new_data_df.iloc[:, :-1].values\nscaler = MinMaxScaler()\nstd = StandardScaler()\nnew_X[:,0:3] = std.fit_transform(scaler.fit_transform(new_X[:,0:3]))\nnew_X[:,4:5] = std.fit_transform(scaler.fit_transform(new_X[:,4:5]))\nnew_X[:,7:9] = std.fit_transform(scaler.fit_transform(new_X[:,7:9]))\n\nprint(\"Dataset after preprocessing\\n\\n\",new_data_df)",
"Dataset after preprocessing\n\n Price Age KM ... Automatic CC Doors\n0 0.763763 -1.822802e+00 -0.583476 ... 0.0 2.314976 3.0\n1 0.832715 -1.822802e+00 0.115551 ... 0.0 2.314976 3.0\n2 0.887877 -1.767012e+00 -0.725566 ... 0.0 2.314976 3.0\n3 1.163685 -1.655430e+00 -0.556163 ... 0.0 2.314976 3.0\n4 0.832715 -1.432267e+00 -0.812059 ... 0.0 2.314976 3.0\n... ... ... ... ... ... ... ...\n1431 -0.891089 -4.893269e-16 -1.295729 ... 0.0 -1.425994 3.0\n1432 0.031491 9.109418e-01 0.000000 ... 0.0 -1.425994 3.0\n1433 -0.615281 -4.893269e-16 -1.390761 ... 0.0 -1.425994 3.0\n1434 -0.960042 7.993604e-01 0.000000 ... 0.0 -1.425994 3.0\n1435 -1.042784 1.134105e+00 -1.849084 ... 0.0 0.177279 5.0\n\n[1436 rows x 9 columns]\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d9e0b00802f1ca0251f623a046613037aab44b | 44,592 | ipynb | Jupyter Notebook | ConvolutionalNeuralNetworks/Week4/Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | wilsonjr/Tensorflow-Developer | fb0feccf9f35b76014363f70501f40a58858b177 | [
"MIT"
] | null | null | null | ConvolutionalNeuralNetworks/Week4/Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | wilsonjr/Tensorflow-Developer | fb0feccf9f35b76014363f70501f40a58858b177 | [
"MIT"
] | null | null | null | ConvolutionalNeuralNetworks/Week4/Exercise_4_Multi_class_classifier_Question-FINAL.ipynb | wilsonjr/Tensorflow-Developer | fb0feccf9f35b76014363f70501f40a58858b177 | [
"MIT"
] | null | null | null | 124.907563 | 17,584 | 0.861365 | [
[
[
"# ATTENTION: Please do not alter any of the provided code in the exercise. Only add your own code where indicated\n# ATTENTION: Please do not add or remove any cells in the exercise. The grader will check specific cells based on the cell position.\n# ATTENTION: Please use the provided epoch values when training.\n\nimport csv\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom os import getcwd",
"_____no_output_____"
],
[
"def get_data(filename):\n # You will need to write code that will read the file passed\n # into this function. The first line contains the column headers\n # so you should ignore it\n # Each successive line contians 785 comma separated values between 0 and 255\n # The first value is the label\n # The rest are the pixel values for that picture\n # The function will return 2 np.array types. One with all the labels\n # One with all the images\n #\n # Tips: \n # If you read a full line (as 'row') then row[0] has the label\n # and row[1:785] has the 784 pixel values\n # Take a look at np.array_split to turn the 784 pixels into 28x28\n # You are reading in strings, but need the values to be floats\n # Check out np.array().astype for a conversion\n with open(filename) as training_file:\n # Your code starts here\n labels = []\n images = []\n \n training_file.readline()\n \n while True:\n \n row = training_file.readline()\n if not row:\n break\n \n row = row.split(',')\n row = np.array([float(x) for x in row])\n \n labels.append(row[0])\n images.append(row[1:].reshape((28, 28)))\n \n images = np.array(images)\n labels = np.array(labels)\n \n # Your code ends here\n return images, labels\n\npath_sign_mnist_train = f\"{getcwd()}/../tmp2/sign_mnist_train.csv\"\npath_sign_mnist_test = f\"{getcwd()}/../tmp2/sign_mnist_test.csv\"\ntraining_images, training_labels = get_data(path_sign_mnist_train)\ntesting_images, testing_labels = get_data(path_sign_mnist_test)\n\n# Keep these\nprint(training_images.shape)\nprint(training_labels.shape)\nprint(testing_images.shape)\nprint(testing_labels.shape)\n\n# Their output should be:\n# (27455, 28, 28)\n# (27455,)\n# (7172, 28, 28)\n# (7172,)",
"(27455, 28, 28)\n(27455,)\n(7172, 28, 28)\n(7172,)\n"
],
[
"# In this section you will have to add another dimension to the data\n# So, for example, if your array is (10000, 28, 28)\n# You will need to make it (10000, 28, 28, 1)\n# Hint: np.expand_dims\n\ntraining_images = np.expand_dims(training_images, 3)\ntesting_images = np.expand_dims(testing_images, 3)\n\n# Create an ImageDataGenerator and do Image Augmentation\ntrain_datagen = ImageDataGenerator(\n rescale=1.0/255.0,\n rotation_range=40,\n width_shift_range=0.2,\n height_shift_range=0.2,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest'\n )\n\nvalidation_datagen = ImageDataGenerator(\n rescale=1.0/255.0)\n\ntrain_generator = train_datagen.flow(training_images,\n training_labels,\n batch_size=64)\n\nvalidation_generator = validation_datagen.flow(testing_images,\n testing_labels,\n batch_size=64)\n \n# Keep These\nprint(training_images.shape)\nprint(testing_images.shape)\n \n# Their output should be:\n# (27455, 28, 28, 1)\n# (7172, 28, 28, 1)",
"(27455, 28, 28, 1)\n(7172, 28, 28, 1)\n"
],
[
"# Define the model\n# Use no more than 2 Conv2D and 2 MaxPooling2D\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(2, 2),\n tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),\n tf.keras.layers.MaxPooling2D(2, 2),\n \n tf.keras.layers.Flatten(),\n tf.keras.layers.Dropout(0.2),\n \n tf.keras.layers.Dense(512, activation='relu'),\n \n tf.keras.layers.Dense(25, activation='softmax')\n])\n\n# Compile Model. \nmodel.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])\n\n# Train the Model\nhistory = model.fit_generator(train_generator, epochs=2, steps_per_epoch=training_images.shape[0]/32,\n validation_data=validation_generator, validation_steps=testing_images.shape[0]/32)\n\nmodel.evaluate(testing_images, testing_labels, verbose=0)",
"Epoch 1/2\n858/857 [==============================] - 82s 95ms/step - loss: 2.4453 - acc: 0.2521 - val_loss: 1.2629 - val_acc: 0.5782\nEpoch 2/2\n858/857 [==============================] - 80s 93ms/step - loss: 1.4929 - acc: 0.5229 - val_loss: 0.6480 - val_acc: 0.7704\n"
],
[
"# Plot the chart for accuracy and loss on both training and validation\n%matplotlib inline\nimport matplotlib.pyplot as plt\nacc = history.history['acc']\nval_acc = history.history['val_acc']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(len(acc))\n\nplt.plot(epochs, acc, 'r', label='Training accuracy')\nplt.plot(epochs, val_acc, 'b', label='Validation accuracy')\nplt.title('Training and validation accuracy')\nplt.legend()\nplt.figure()\n\nplt.plot(epochs, loss, 'r', label='Training Loss')\nplt.plot(epochs, val_loss, 'b', label='Validation Loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Submission Instructions",
"_____no_output_____"
]
],
[
[
"# Now click the 'Submit Assignment' button above.",
"_____no_output_____"
]
],
[
[
"# When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This will free up resources for your fellow learners. ",
"_____no_output_____"
]
],
[
[
"%%javascript\n<!-- Save the notebook -->\nIPython.notebook.save_checkpoint();",
"_____no_output_____"
],
[
"%%javascript\nIPython.notebook.session.delete();\nwindow.onbeforeunload = null\nsetTimeout(function() { window.close(); }, 1000);",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0d9e10675c60f480587ba2c8979555c6c50d5d9 | 995,718 | ipynb | Jupyter Notebook | example/octavemagic_extension.ipynb | ankostis/oct2py | cebb3f79a8c33365efe3de364d57a9afe3f73c06 | [
"MIT"
] | 195 | 2015-01-15T15:07:56.000Z | 2022-03-16T05:15:16.000Z | example/octavemagic_extension.ipynb | ankostis/oct2py | cebb3f79a8c33365efe3de364d57a9afe3f73c06 | [
"MIT"
] | 140 | 2015-01-01T01:09:09.000Z | 2022-03-30T15:30:33.000Z | example/octavemagic_extension.ipynb | ankostis/oct2py | cebb3f79a8c33365efe3de364d57a9afe3f73c06 | [
"MIT"
] | 37 | 2015-12-22T18:52:13.000Z | 2021-11-15T20:02:03.000Z | 113.394602 | 132,882 | 0.622128 | [
[
[
"# octavemagic: Octave inside IPython",
"_____no_output_____"
],
[
"## Installation",
"_____no_output_____"
],
[
"The `octavemagic` extension provides the ability to interact with Octave. It is provided by the `oct2py` package,\nwhich may be installed using `pip` or `easy_install`.\n\nTo enable the extension, load it as follows:",
"_____no_output_____"
]
],
[
[
"%load_ext oct2py.ipython",
"_____no_output_____"
]
],
[
[
"## Overview",
"_____no_output_____"
],
[
"Loading the extension enables three magic functions: `%octave`, `%octave_push`, and `%octave_pull`.\n\nThe first is for executing one or more lines of Octave, while the latter allow moving variables between the Octave and Python workspace.\nHere you see an example of how to execute a single line of Octave, and how to transfer the generated value back to Python:",
"_____no_output_____"
]
],
[
[
"x = %octave [1 2; 3 4];\nx",
"_____no_output_____"
],
[
"a = [1, 2, 3]\n\n%octave_push a\n%octave a = a * 2;\n%octave_pull a\na",
"_____no_output_____"
]
],
[
[
"When using the cell magic, `%%octave` (note the double `%`), multiple lines of Octave can be executed together. Unlike\nwith the single cell magic, no value is returned, so we use the `-i` and `-o` flags to specify input and output variables. Also note the use of the semicolon to suppress the Octave output.",
"_____no_output_____"
]
],
[
[
"%%octave -i x -o U,S,V\n[U, S, V] = svd(x);",
"_____no_output_____"
],
[
"print(U, S, V)",
"[[-0.40455358 -0.9145143 ]\n [-0.9145143 0.40455358]] [[ 5.4649857 0. ]\n [ 0. 0.36596619]] [[-0.57604844 0.81741556]\n [-0.81741556 -0.57604844]]\n"
]
],
[
[
"## Plotting",
"_____no_output_____"
],
[
"Plot output is automatically captured and displayed, and using the `-f` flag you may choose its format (currently, `png` and `svg` are supported).",
"_____no_output_____"
]
],
[
[
"%%octave -f svg\n\np = [12 -2.5 -8 -0.1 8];\nx = 0:0.01:1;\n\npolyout(p, 'x')\nplot(x, polyval(p, x));",
"_____no_output_____"
]
],
[
[
"The width or the height can be specified to constrain the image while maintaining the original aspect ratio.",
"_____no_output_____"
]
],
[
[
"%%octave -f png -w 600\n\n% butterworth filter, order 2, cutoff pi/2 radians\nb = [0.292893218813452 0.585786437626905 0.292893218813452];\na = [1 0 0.171572875253810];\nfreqz(b, a, 32);",
"_____no_output_____"
],
[
"%%octave -s 600,200 -f png\n\n% Note: On Windows, this will not show the plots unless Ghostscript is installed.\n\nsubplot(121);\n[x, y] = meshgrid(0:0.1:3);\nr = sin(x - 0.5).^2 + cos(y - 0.5).^2;\nsurf(x, y, r);\n\nsubplot(122);\nsombrero()",
"_____no_output_____"
]
],
[
[
"Multiple figures can be drawn. Note that when using imshow the image will be created as a PNG with the raw\nimage dimensions.",
"_____no_output_____"
]
],
[
[
"%%octave -f svg -h 300\nsombrero\nfigure\nimshow(rand(200,200))",
"_____no_output_____"
]
],
[
[
"Plots can be drawn inline (default) or bring up the Octave plotting GUI by using the -g (or --gui) flag: ",
"_____no_output_____"
]
],
[
[
"%%octave -g\nplot([1,2,3])\n# brings up an Octave plotting GUI",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0d9eb53bea963ebfe993839cf7dfb3e5fb2c97a | 20,241 | ipynb | Jupyter Notebook | how-to-use-azureml/automated-machine-learning/forecasting-grouping/auto-ml-forecasting-grouping.ipynb | MustAl-Du/MachineLearningNotebooks | a85cf47a5b923463bdc5a14bfd7a3ec0d46dd35d | [
"MIT"
] | null | null | null | how-to-use-azureml/automated-machine-learning/forecasting-grouping/auto-ml-forecasting-grouping.ipynb | MustAl-Du/MachineLearningNotebooks | a85cf47a5b923463bdc5a14bfd7a3ec0d46dd35d | [
"MIT"
] | null | null | null | how-to-use-azureml/automated-machine-learning/forecasting-grouping/auto-ml-forecasting-grouping.ipynb | MustAl-Du/MachineLearningNotebooks | a85cf47a5b923463bdc5a14bfd7a3ec0d46dd35d | [
"MIT"
] | 1 | 2021-06-02T06:31:15.000Z | 2021-06-02T06:31:15.000Z | 36.735027 | 324 | 0.553826 | [
[
[
"Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"# Automated Machine Learning\n\n_**Forecasting with grouping using Pipelines**_\n\n## Contents\n\n1. [Introduction](#Introduction)\n2. [Setup](#Setup)\n3. [Data](#Data)\n4. [Compute](#Compute)\n4. [AutoMLConfig](#AutoMLConfig)\n5. [Pipeline](#Pipeline)\n5. [Train](#Train)\n6. [Test](#Test)\n\n\n## Introduction\nIn this example we use Automated ML and Pipelines to train, select, and operationalize forecasting models for multiple time-series.\n\nIf you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first if you haven't already to establish your connection to the AzureML Workspace.\n\nIn this notebook you will learn how to:\n\n* Create an Experiment in an existing Workspace.\n* Configure AutoML using AutoMLConfig.\n* Use our helper script to generate pipeline steps to split, train, and deploy the models.\n* Explore the results.\n* Test the models.\n\nIt is advised you ensure your cluster has at least one node per group.\n\nAn Enterprise workspace is required for this notebook. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade)\n\n## Setup\nAs part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments. ",
"_____no_output_____"
]
],
[
[
"import json\nimport logging\nimport warnings\n\nimport numpy as np\nimport pandas as pd\n\nimport azureml.core\n\nfrom azureml.core.workspace import Workspace\nfrom azureml.core.experiment import Experiment\nfrom azureml.train.automl import AutoMLConfig",
"_____no_output_____"
]
],
[
[
"Accessing the Azure ML workspace requires authentication with Azure.\n\nThe default authentication is interactive authentication using the default tenant. Executing the ws = Workspace.from_config() line in the cell below will prompt for authentication the first time that it is run.\n\nIf you have multiple Azure tenants, you can specify the tenant by replacing the ws = Workspace.from_config() line in the cell below with the following:\n```\nfrom azureml.core.authentication import InteractiveLoginAuthentication\nauth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\nws = Workspace.from_config(auth = auth)\n```\nIf you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the ws = Workspace.from_config() line in the cell below with the following:\n```\nfrom azureml.core.authentication import ServicePrincipalAuthentication\nauth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\nws = Workspace.from_config(auth = auth)\n```\nFor more details, see aka.ms/aml-notebook-auth",
"_____no_output_____"
]
],
[
[
"ws = Workspace.from_config()\nds = ws.get_default_datastore()\n\n# choose a name for the run history container in the workspace\nexperiment_name = 'automl-grouping-oj'\n# project folder\nproject_folder = './sample_projects/{}'.format(experiment_name)\n\nexperiment = Experiment(ws, experiment_name)\n\noutput = {}\noutput['SDK version'] = azureml.core.VERSION\noutput['Subscription ID'] = ws.subscription_id\noutput['Workspace'] = ws.name\noutput['Resource Group'] = ws.resource_group\noutput['Location'] = ws.location\noutput['Project Directory'] = project_folder\noutput['Run History Name'] = experiment_name\npd.set_option('display.max_colwidth', -1)\noutputDf = pd.DataFrame(data = output, index = [''])\noutputDf.T",
"_____no_output_____"
]
],
[
[
"## Data\nUpload data to your default datastore and then load it as a `TabularDataset`",
"_____no_output_____"
]
],
[
[
"from azureml.core.dataset import Dataset",
"_____no_output_____"
],
[
"# upload training and test data to your default datastore\nds = ws.get_default_datastore()\nds.upload(src_dir='./data', target_path='groupdata', overwrite=True, show_progress=True)",
"_____no_output_____"
],
[
"# load data from your datastore\ndata = Dataset.Tabular.from_delimited_files(path=ds.path('groupdata/dominicks_OJ_2_5_8_train.csv'))\ndata_test = Dataset.Tabular.from_delimited_files(path=ds.path('groupdata/dominicks_OJ_2_5_8_test.csv'))\n\ndata.take(5).to_pandas_dataframe()",
"_____no_output_____"
]
],
[
[
"## Compute \n\n#### Create or Attach existing AmlCompute\n\nYou will need to create a compute target for your automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n#### Creation of AmlCompute takes approximately 5 minutes. \nIf the AmlCompute with that name is already in your workspace this code will skip the creation process.\nAs with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this article on the default limits and how to request more quota.",
"_____no_output_____"
]
],
[
[
"from azureml.core.compute import AmlCompute\nfrom azureml.core.compute import ComputeTarget\n\n# Choose a name for your cluster.\namlcompute_cluster_name = \"cpu-cluster-11\"\n\nfound = False\n# Check if this compute target already exists in the workspace.\ncts = ws.compute_targets\nif amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == 'AmlCompute':\n found = True\n print('Found existing compute target.')\n compute_target = cts[amlcompute_cluster_name]\n \nif not found:\n print('Creating a new compute target...')\n provisioning_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_D2_V2\", # for GPU, use \"STANDARD_NC6\"\n #vm_priority = 'lowpriority', # optional\n max_nodes = 6)\n\n # Create the cluster.\n compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, provisioning_config)\n \nprint('Checking cluster status...')\n# Can poll for a minimum number of nodes and for a specific timeout.\n# If no min_node_count is provided, it will use the scale settings for the cluster.\ncompute_target.wait_for_completion(show_output = True, min_node_count = None, timeout_in_minutes = 20)\n \n# For a more detailed view of current AmlCompute status, use get_status().",
"_____no_output_____"
]
],
[
[
"## AutoMLConfig\n#### Create a base AutoMLConfig\nThis configuration will be used for all the groups in the pipeline.",
"_____no_output_____"
]
],
[
[
"target_column = 'Quantity'\ntime_column_name = 'WeekStarting'\ngrain_column_names = ['Brand']\ngroup_column_names = ['Store']\nmax_horizon = 20",
"_____no_output_____"
],
[
"automl_settings = {\n \"iteration_timeout_minutes\" : 5,\n \"experiment_timeout_minutes\" : 15,\n \"primary_metric\" : 'normalized_mean_absolute_error',\n \"time_column_name\": time_column_name,\n \"grain_column_names\": grain_column_names,\n \"max_horizon\": max_horizon,\n \"drop_column_names\": ['logQuantity'],\n \"max_concurrent_iterations\": 2,\n \"max_cores_per_iteration\": -1\n}\nbase_configuration = AutoMLConfig(task = 'forecasting',\n path = project_folder,\n n_cross_validations=3,\n **automl_settings\n )",
"_____no_output_____"
]
],
[
[
"## Pipeline\nWe've written a script to generate the individual pipeline steps used to create each automl step. Calling this script will return a list of PipelineSteps that will train multiple groups concurrently and then deploy these models.\n\nThis step requires an Enterprise workspace to gain access to this feature. To learn more about creating an Enterprise workspace or upgrading to an Enterprise workspace from the Azure portal, please visit our [Workspace page.](https://docs.microsoft.com/azure/machine-learning/service/concept-workspace#upgrade).\n\n### Call the method to build pipeline steps\n\n`build_pipeline_steps()` takes as input:\n* **automlconfig**: This is the configuration used for every automl step\n* **df**: This is the dataset to be used for training\n* **target_column**: This is the target column of the dataset\n* **compute_target**: The compute to be used for training\n* **deploy**: The option on to deploy the models after training, if set to true an extra step will be added to deploy a webservice with all the models (default is `True`)\n* **service_name**: The service name for the model query endpoint\n* **time_column_name**: The time column of the data",
"_____no_output_____"
]
],
[
[
"from azureml.core.webservice import Webservice\nfrom azureml.exceptions import WebserviceException\n\nservice_name = 'grouped-model'\ntry:\n # if you want to get existing service below is the command\n # since aci name needs to be unique in subscription deleting existing aci if any\n # we use aci_service_name to create azure aci\n service = Webservice(ws, name=service_name)\n if service:\n service.delete()\nexcept WebserviceException as e:\n pass",
"_____no_output_____"
],
[
"from build import build_pipeline_steps\n\nsteps = build_pipeline_steps(\n base_configuration, \n data, \n target_column,\n compute_target, \n group_column_names=group_column_names, \n deploy=True, \n service_name=service_name, \n time_column_name=time_column_name\n)",
"_____no_output_____"
]
],
[
[
"## Train\nUse the list of steps generated from above to build the pipeline and submit it to your compute for remote training.",
"_____no_output_____"
]
],
[
[
"from azureml.pipeline.core import Pipeline\npipeline = Pipeline(\n description=\"A pipeline with one model per data group using Automated ML.\",\n workspace=ws, \n steps=steps)\n\npipeline_run = experiment.submit(pipeline)",
"_____no_output_____"
],
[
"from azureml.widgets import RunDetails\nRunDetails(pipeline_run).show()",
"_____no_output_____"
],
[
"pipeline_run.wait_for_completion(show_output=False)",
"_____no_output_____"
]
],
[
[
"## Test\n\nNow we can use the holdout set to test our models and ensure our web-service is running as expected.",
"_____no_output_____"
]
],
[
[
"from azureml.core.webservice import AciWebservice\nservice = AciWebservice(ws, service_name)",
"_____no_output_____"
],
[
"X_test = data_test.to_pandas_dataframe()\n# Drop the column we are trying to predict (target column)\nx_pred = X_test.drop(target_column, inplace=False, axis=1)\nx_pred.head()",
"_____no_output_____"
],
[
"# Get Predictions\ntest_sample = X_test.drop(target_column, inplace=False, axis=1).to_json()\npredictions = service.run(input_data=test_sample)\nprint(predictions)",
"_____no_output_____"
],
[
"# Convert predictions from JSON to DataFrame\npred_dict =json.loads(predictions)\nX_pred = pd.read_json(pred_dict['predictions'])\nX_pred.head()",
"_____no_output_____"
],
[
"# Fix the index\nPRED = 'pred_target'\nX_pred[time_column_name] = pd.to_datetime(X_pred[time_column_name], unit='ms')\n\nX_pred.set_index([time_column_name] + grain_column_names, inplace=True, drop=True)\nX_pred.rename({'_automl_target_col': PRED}, inplace=True, axis=1)\n# Drop all but the target column and index\nX_pred.drop(list(set(X_pred.columns.values).difference({PRED})), axis=1, inplace=True)",
"_____no_output_____"
],
[
"X_test[time_column_name] = pd.to_datetime(X_test[time_column_name])\nX_test.set_index([time_column_name] + grain_column_names, inplace=True, drop=True)\n# Merge predictions with raw features\npred_test = X_test.merge(X_pred, left_index=True, right_index=True)\npred_test.head()",
"_____no_output_____"
],
[
"from sklearn.metrics import mean_absolute_error, mean_squared_error\ndef MAPE(actual, pred):\n \"\"\"\n Calculate mean absolute percentage error.\n Remove NA and values where actual is close to zero\n \"\"\"\n not_na = ~(np.isnan(actual) | np.isnan(pred))\n not_zero = ~np.isclose(actual, 0.0)\n actual_safe = actual[not_na & not_zero]\n pred_safe = pred[not_na & not_zero]\n APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)\n return np.mean(APE)\n\ndef get_metrics(actuals, preds):\n return pd.Series(\n {\n \"RMSE\": np.sqrt(mean_squared_error(actuals, preds)),\n \"NormRMSE\": np.sqrt(mean_squared_error(actuals, preds))/np.abs(actuals.max()-actuals.min()),\n \"MAE\": mean_absolute_error(actuals, preds),\n \"MAPE\": MAPE(actuals, preds)},\n )",
"_____no_output_____"
],
[
"get_metrics(pred_test[PRED].values, pred_test[target_column].values)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0d9ed4377be0ce2c45572e723ab0a30ecc35bb2 | 124,428 | ipynb | Jupyter Notebook | ex05-Filtering a Query with WHERE.ipynb | hoganbyun/Practice-SQL-with-SQLite-and-Jupyter-Notebook | b553020e69cafb0e6ee64ebc5ec683d938a621b4 | [
"MIT"
] | 78 | 2018-10-11T22:44:38.000Z | 2022-03-18T00:06:48.000Z | ex05-Filtering a Query with WHERE.ipynb | hoganbyun/Practice-SQL-with-SQLite-and-Jupyter-Notebook | b553020e69cafb0e6ee64ebc5ec683d938a621b4 | [
"MIT"
] | 1 | 2019-12-28T20:28:15.000Z | 2019-12-29T11:28:51.000Z | ex05-Filtering a Query with WHERE.ipynb | hoganbyun/Practice-SQL-with-SQLite-and-Jupyter-Notebook | b553020e69cafb0e6ee64ebc5ec683d938a621b4 | [
"MIT"
] | 46 | 2019-03-07T08:50:40.000Z | 2022-03-07T02:24:25.000Z | 34.814773 | 534 | 0.360747 | [
[
[
"# ex05-Filtering a Query with WHERE\n\nSometimes, you’ll want to only check the rows returned by a query, where one or more columns meet certain criteria. This can be done with a WHERE statement. The WHERE clause is an optional clause of the SELECT statement. It appears after the FROM clause as the following statement:\n>SELECT column_list FROM table_name WHERE search_condition;",
"_____no_output_____"
]
],
[
[
"%load_ext sql",
"_____no_output_____"
]
],
[
[
"### 1. Connet to the given database of demo.db3",
"_____no_output_____"
]
],
[
[
"%sql sqlite:///data/demo.db3",
"_____no_output_____"
]
],
[
[
"If you do not remember the tables in the demo data, you can always use the follow command to query. Here we select the table of watershed_yearly as an example.",
"_____no_output_____"
]
],
[
[
"%sql SELECT name FROM sqlite_master WHERE type='table'",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"### 2. Retrieving data with WHERE\n\nTake the table of ***rch*** as an example.\n\n#### 2.1 Check the table colums firstly.",
"_____no_output_____"
]
],
[
[
"%sql SELECT * From rch LIMIT 5",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"#### 2.2 Check the number of rows\n\nThere should be 8280 rows. This can be done with the SQLite ***COUNT*** function. We will touch other SQLite function over the next few notebooks.",
"_____no_output_____"
]
],
[
[
"%sql SELECT COUNT(*) as nrow From rch",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"#### 2.3 Use WHERE to retrieve data\n\nLet’s say we are interested in records for only the year 1981. Using a WHERE is pretty straightforward for a simple criterion like this. ",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR=1981",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"#### 2.4 use *AND* to further filter data\n\nThere are 23 RCHs. We are only intersted in the 10th RCH. We can add another filter condition with an ***AND*** statement.",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR=1981 AND RCH=10",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"#### 2.5 More combinations of filters",
"_____no_output_____"
],
[
"We also can further filter data with the operators of ***!=*** or ***<>*** to get data except 1981.",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR<>1981 and RCH=10 and MO=6",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"We can further filter the data to spefic months using ***OR*** statement. For example, we'd like check the data in the months of 3, 6 and 9. However, we have to use ***()*** to make them as one condition.:) It is a trick. You can try!",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and (MO=3 or MO=6 or MO=9 or MO=12)",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"Or we can simplify the above filter using the ***IN*** statement.",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and MO in (3, 6, 9, 12)",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"Or the months are ***NOT*** in 3, 6, 9, 12",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and MO NOT IN (3,6,9,12)",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"#### 2.6 Filter with math operators\n\nFor example, we could use the modulus operator (%) to filter the MOs.",
"_____no_output_____"
]
],
[
[
"%sql SELECT RCH, YR, MO, FLOW_INcms, FLOW_OUTcms From rch WHERE YR>2009 and RCH=10 and MO % 3 = 0",
" * sqlite:///data/demo.db3\nDone.\n"
]
],
[
[
"### Summary\n\nIn the WHERE statement, we can the combinations of ***NOT, IN, <>, !=, >=, >, <, <=, AND, OR, ()*** and even some of math operators (such as %, *, /, +, -)to retrieve the data we want easily and efficiently. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
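"code",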
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
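"markdown",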
"markdown"
]
] |
d0d9ed7621927798c2b396589e4223d042e2e209 | 104,511 | ipynb | Jupyter Notebook | Quantum Cryptography/Quantum_inspired_digital_signatures.ipynb | NimishMishra/research | 2448fc9da643429117b851cb582d490cfe46d422 | [
"MIT"
] | 4 | 2020-06-21T07:22:27.000Z | 2021-07-12T16:51:20.000Z | Quantum Cryptography/Quantum_inspired_digital_signatures.ipynb | NimishMishra/research | 2448fc9da643429117b851cb582d490cfe46d422 | [
"MIT"
] | 2 | 2020-12-30T18:06:26.000Z | 2020-12-30T18:06:26.000Z | Quantum Cryptography/Quantum_inspired_digital_signatures.ipynb | NimishMishra/research | 2448fc9da643429117b851cb582d490cfe46d422 | [
"MIT"
] | null | null | null | 50.366747 | 5,602 | 0.511956 | [
[
[
"!pip3 install qiskit",
"Requirement already satisfied: qiskit in /usr/local/lib/python3.6/dist-packages (0.19.4)\nRequirement already satisfied: qiskit-ignis==0.3.0 in /usr/local/lib/python3.6/dist-packages (from qiskit) (0.3.0)\nRequirement already satisfied: qiskit-terra==0.14.2 in /usr/local/lib/python3.6/dist-packages (from qiskit) (0.14.2)\nRequirement already satisfied: qiskit-aer==0.5.2 in /usr/local/lib/python3.6/dist-packages (from qiskit) (0.5.2)\nRequirement already satisfied: qiskit-aqua==0.7.2 in /usr/local/lib/python3.6/dist-packages (from qiskit) (0.7.2)\nRequirement already satisfied: qiskit-ibmq-provider==0.7.2 in /usr/local/lib/python3.6/dist-packages (from qiskit) (0.7.2)\nRequirement already satisfied: scipy!=0.19.1,>=0.19 in /usr/local/lib/python3.6/dist-packages (from qiskit-ignis==0.3.0->qiskit) (1.4.1)\nRequirement already satisfied: setuptools>=40.1.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-ignis==0.3.0->qiskit) (47.3.1)\nRequirement already satisfied: numpy>=1.13 in /usr/local/lib/python3.6/dist-packages (from qiskit-ignis==0.3.0->qiskit) (1.18.5)\nRequirement already satisfied: marshmallow<4,>=3 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (3.6.1)\nRequirement already satisfied: psutil>=5 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (5.4.8)\nRequirement already satisfied: retworkx>=0.3.2 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (0.3.4)\nRequirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (2.6.0)\nRequirement already satisfied: ply>=3.10 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (3.11)\nRequirement already satisfied: sympy>=1.3 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (1.6)\nRequirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (2.8.1)\nRequirement already satisfied: marshmallow-polyfield<6,>=5.7 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (5.9)\nRequirement already satisfied: python-constraint>=1.4 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (1.4.0)\nRequirement already satisfied: fastjsonschema>=2.10 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (2.14.4)\nRequirement already satisfied: dill>=0.3 in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (0.3.1.1)\nRequirement already satisfied: networkx>=2.2; python_version > \"3.5\" in /usr/local/lib/python3.6/dist-packages (from qiskit-terra==0.14.2->qiskit) (2.4)\nRequirement already satisfied: cython>=0.27.1 in /usr/local/lib/python3.6/dist-packages (from qiskit-aer==0.5.2->qiskit) (0.29.20)\nRequirement already satisfied: pybind11>=2.4 in /usr/local/lib/python3.6/dist-packages (from qiskit-aer==0.5.2->qiskit) (2.5.0)\nRequirement already satisfied: pyscf; sys_platform != \"win32\" in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (1.7.3)\nRequirement already satisfied: docplex in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (2.14.186)\nRequirement already satisfied: scikit-learn>=0.20.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (0.22.2.post1)\nRequirement already satisfied: h5py in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (2.10.0)\nRequirement already 
satisfied: quandl in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (3.5.0)\nRequirement already satisfied: cvxpy<1.1.0,>1.0.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (1.0.31)\nRequirement already satisfied: dlx in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (1.0.4)\nRequirement already satisfied: fastdtw in /usr/local/lib/python3.6/dist-packages (from qiskit-aqua==0.7.2->qiskit) (0.3.4)\nRequirement already satisfied: requests-ntlm>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit) (1.1.0)\nRequirement already satisfied: websockets<8,>=7 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit) (7.0)\nRequirement already satisfied: nest-asyncio!=1.1.0,>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit) (1.3.3)\nRequirement already satisfied: requests>=2.19 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit) (2.23.0)\nRequirement already satisfied: urllib3>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from qiskit-ibmq-provider==0.7.2->qiskit) (1.24.3)\nRequirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.6/dist-packages (from sympy>=1.3->qiskit-terra==0.14.2->qiskit) (1.1.0)\nRequirement already satisfied: six>=1.5 in /usr/local/lib/python3.6/dist-packages (from python-dateutil>=2.8.0->qiskit-terra==0.14.2->qiskit) (1.12.0)\nRequirement already satisfied: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.2; python_version > \"3.5\"->qiskit-terra==0.14.2->qiskit) (4.4.2)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20.0->qiskit-aqua==0.7.2->qiskit) (0.15.1)\nRequirement already satisfied: inflection>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from quandl->qiskit-aqua==0.7.2->qiskit) (0.5.0)\nRequirement already satisfied: more-itertools in /usr/local/lib/python3.6/dist-packages (from quandl->qiskit-aqua==0.7.2->qiskit) (8.4.0)\nRequirement already satisfied: pandas>=0.14 in /usr/local/lib/python3.6/dist-packages (from quandl->qiskit-aqua==0.7.2->qiskit) (1.0.4)\nRequirement already satisfied: scs>=1.1.3 in /usr/local/lib/python3.6/dist-packages (from cvxpy<1.1.0,>1.0.0->qiskit-aqua==0.7.2->qiskit) (2.1.2)\nRequirement already satisfied: osqp>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from cvxpy<1.1.0,>1.0.0->qiskit-aqua==0.7.2->qiskit) (0.6.1)\nRequirement already satisfied: ecos>=2 in /usr/local/lib/python3.6/dist-packages (from cvxpy<1.1.0,>1.0.0->qiskit-aqua==0.7.2->qiskit) (2.0.7.post1)\nRequirement already satisfied: multiprocess in /usr/local/lib/python3.6/dist-packages (from cvxpy<1.1.0,>1.0.0->qiskit-aqua==0.7.2->qiskit) (0.70.9)\nRequirement already satisfied: ntlm-auth>=1.0.2 in /usr/local/lib/python3.6/dist-packages (from requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit) (1.5.0)\nRequirement already satisfied: cryptography>=1.3 in /usr/local/lib/python3.6/dist-packages (from requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit) (2.9.2)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.2->qiskit) (2020.4.5.2)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.2->qiskit) (2.9)\nRequirement already satisfied: chardet<4,>=3.0.2 in 
/usr/local/lib/python3.6/dist-packages (from requests>=2.19->qiskit-ibmq-provider==0.7.2->qiskit) (3.0.4)\nRequirement already satisfied: pytz>=2017.2 in /usr/local/lib/python3.6/dist-packages (from pandas>=0.14->quandl->qiskit-aqua==0.7.2->qiskit) (2018.9)\nRequirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from osqp>=0.4.1->cvxpy<1.1.0,>1.0.0->qiskit-aqua==0.7.2->qiskit) (0.16.0)\nRequirement already satisfied: cffi!=1.11.3,>=1.8 in /usr/local/lib/python3.6/dist-packages (from cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit) (1.14.0)\nRequirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi!=1.11.3,>=1.8->cryptography>=1.3->requests-ntlm>=1.1.0->qiskit-ibmq-provider==0.7.2->qiskit) (2.20)\n"
],
[
"import qiskit",
"_____no_output_____"
],
[
"constant_index_dictionary = {}\nconstant_index_dictionary['0000'] = [0, 2]\nconstant_index_dictionary['0001'] = [2, 3]\nconstant_index_dictionary['0010'] = [0, 1]\nconstant_index_dictionary['0011'] = [1, 3]\nconstant_index_dictionary['0100'] = [2, 3]\nconstant_index_dictionary['0101'] = [1, 2]\nconstant_index_dictionary['0110'] = [0, 2]\nconstant_index_dictionary['0111'] = [0, 2]\nconstant_index_dictionary['1000'] = [0, 3]\nconstant_index_dictionary['1001'] = [0, 1]\nconstant_index_dictionary['1010'] = [1, 2]\nconstant_index_dictionary['1011'] = [0, 3]\nconstant_index_dictionary['1100'] = [1, 3]\nconstant_index_dictionary['1101'] = [2, 3]\nconstant_index_dictionary['1110'] = [1, 3]\nconstant_index_dictionary['1111'] = [0, 1]",
"_____no_output_____"
],
[
"import qiskit\nimport numpy as np\nimport time\n\nCLASSICAL_REGISTER_LENGTH = 5\nQUANTUM_REGISTER_LENGTH = 5\n\ncircuit_building_start_time = time.time()\nsimulator = qiskit.Aer.get_backend('qasm_simulator')\nclassical_register = qiskit.ClassicalRegister(CLASSICAL_REGISTER_LENGTH)\nquantum_register = qiskit.QuantumRegister(QUANTUM_REGISTER_LENGTH)\ncircuit = qiskit.QuantumCircuit(quantum_register, classical_register)\ncircuit_building_end_time = time.time()\n\n\nAND_gate_auxillary_qubit = QUANTUM_REGISTER_LENGTH - 1 # last qubit as the auxillary qubit\n\n'''\n Applies quantum AND operation to specified pair of qubits, stores the operation in AND_gate_auxillary_qubit,\n and stores the result in a classical register\n @PARAMS:\n qubit1: position of the first qubit\n qubit2: position of the second qubit\n qubit1_one: whether the first qubit is NOT\n qubit2_one: whether the second qubit is NOT\n classical_register_position: the classical register position to store the measurement of AND_gate_auxillary_qubit\n'''\ndef AND_2_qubit(qubit1, qubit2, qubit1_one, qubit2_one, classical_register_position):\n \n if(qubit1_one):\n circuit.x(quantum_register[qubit1])\n if(qubit2_one):\n circuit.x(quantum_register[qubit2])\n circuit.ccx(quantum_register[qubit1], quantum_register[qubit2], quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_position])\n if(qubit1_one):\n circuit.x(quantum_register[qubit1])\n if(qubit2_one):\n circuit.x(quantum_register[qubit2])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n\n'''\n Applies the AND gate operation on a list of n qubits\n @PARAMS:\n qubit_list: list of qubits to perform the operation on\n qubit_one_list: whether each of those qubits is NOT\n @RETURN:\n result of the n-qubit AND operation\n'''\ndef AND_n_qubits(qubit_list, qubit_one_list):\n \n length = len(qubit_list)\n if(length != len(qubit_one_list)):\n print(\"Incorrect dimensions\")\n return\n classical_register_index = 0 # where to store pairwise AND operation results\n\n # handling odd number of qubits by preprocessing the last qubit\n if(length % 2 != 0):\n if(qubit_one_list[length - 1] == 1):\n circuit.x(quantum_register[qubit_list[length-1]])\n circuit.cx(quantum_register[qubit_list[length - 1]], quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_index])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n classical_register_index = classical_register_index + 1\n if(qubit_one_list[length - 1] == 1):\n circuit.x(quantum_register[qubit_list[length-1]])\n length = length - 1\n\n\n for index in range(length - 1, 0, -2):\n AND_2_qubit(qubit_list[index], qubit_list[index - 1], qubit_one_list[index], qubit_one_list[index - 1], classical_register_index)\n classical_register_index = classical_register_index + 1\n \n job = qiskit.execute(circuit, simulator, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n output = 1\n for index in range(0, classical_register_index, 1):\n output = output & int(counts[CLASSICAL_REGISTER_LENGTH - 1 - index])\n \n return output\n\ndef controlled_n_qubit_h(qubit_list, qubit_one_list):\n output = AND_n_qubits(qubit_list, qubit_one_list)\n if(output == 1):\n circuit.h(quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], 
classical_register[0])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n job = qiskit.execute(circuit, simulator, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n return int(counts[len(counts) - 1])\n return 0\n\n'''\n the main circuit for the following truth table:\n A, B, C, D = binary representation input state for the robot\n P, Q, R, S = binary representation of the output state from the robot\n\n New circuit in register...\n'''\n\ndef main_circuit(STEPS, initial_state):\n signature = \"\"\n state = initial_state\n step = 0\n while (step < STEPS):\n\n dont_care_list = constant_index_dictionary[state]\n input_state = state\n state = \"\"\n\n P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])\n \n Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])\n\n R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])\n\n S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])\n\n\n state = state + str(P) + str(Q) + str(R) + str(S) \n \n y = int(input_state, 2)^int(state,2)\n y = bin(y)[2:].zfill(len(state))\n # print(\"\" + str(y) + \" is the XOR string\")\n hamming_distance = len(y.replace('0', \"\"))\n # print(input_state + \" \" + state + \" \" + str(hamming_distance))\n step = step + hamming_distance\n hidden_state = \"\"\n for j in range(len(state)):\n if(j in dont_care_list):\n hidden_state = hidden_state + \"x\"\n else:\n hidden_state = hidden_state + state[j]\n \n # print(state + \" \" + hidden_state)\n signature = signature + hidden_state\n\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == 1):\n circuit.x(quantum_register[0])\n \n if(Q == 1):\n circuit.x(quantum_register[1])\n\n if(R == 1):\n circuit.x(quantum_register[2])\n\n if(S == 1):\n circuit.x(quantum_register[3])\n\n print(\"End state: \" + str(P) + str(Q) + str(R) + str(S) )\n print(\"Signature: \" + signature)\n \ndef initialise_starting_state(P, Q, R, S):\n if(P == 1):\n circuit.x(quantum_register[0])\n if(Q == 1):\n circuit.x(quantum_register[1])\n if(R == 1):\n circuit.x(quantum_register[2])\n if(S == 1):\n circuit.x(quantum_register[3])\n print(\"Message: \" + str(P) + str(Q) + str(R) + str(S))\n\ndef measure_time():\n\n total_time = 0\n for i in range(100):\n start_time = time.time()\n # output = AND_n_qubits([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n output = controlled_n_qubit_h([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n 
\n print(str(i) + \" \" + str(output))\n end_time = time.time()\n total_time = total_time + (end_time - start_time)\n print(\"Average time: \" + str(total_time/100))\n\n\nstart_time = time.time()\ninitialise_starting_state(1, 0, 1, 1) # message to be signed\nSTEPS = 20 # security parameter: length of the walk\nmain_circuit(STEPS, '1011') \n# measure_time()\nend_time = time.time()\nprint(\"Run in time \" + str(end_time - start_time))\nprint(circuit_building_end_time - circuit_building_start_time)",
"Message: 1011\nEnd state: 1111\nSignature: x00x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\nRun in time 5.65201735496521\n0.0003039836883544922\n"
],
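[
"# Illustrative sketch (added for clarity; not part of the original scheme): main_circuit\n# above advances the step counter by the Hamming distance between consecutive walk\n# states, computed via XOR. The same computation in plain Python:\ndef hamming_distance(a, b):\n    y = int(a, 2) ^ int(b, 2)  # XOR of the two bit strings as integers\n    return bin(y).count('1')   # number of differing bit positions\n\nprint(hamming_distance('1011', '0110'))  # '1011' and '0110' differ in 3 positions",
"_____no_output_____"
],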
[
"def recipient_initialise_starting_state(P, Q, R, S):\n if(P == \"1\"):\n circuit.x(quantum_register[0])\n if(Q == \"1\"):\n circuit.x(quantum_register[1])\n if(R == \"1\"):\n circuit.x(quantum_register[2])\n if(S == \"1\"):\n circuit.x(quantum_register[3])\n print(\"Message: \" + str(P) + str(Q) + str(R) + str(S))\n\ndef recipient(message, signature, end_state):\n STEPS = len(signature)/len(end_state)\n STEPS = int(STEPS)\n index = 0\n recipient_initialise_starting_state(message[0], message[1], message[2], message[3])\n state = message\n recreated_signature = \"\"\n for _ in range(STEPS):\n\n dont_care_list = constant_index_dictionary[state]\n state = \"\"\n \n P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])\n \n Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])\n\n R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])\n\n S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])\n\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n P = P | 1\n elif(signature[index] != \"x\"):\n P = P & 0\n \n index = index + 1\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n Q = Q | 1\n elif(signature[index] != \"x\"):\n Q = Q & 0\n \n index = index + 1\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n R = R | 1\n elif(signature[index] != \"x\"):\n R = R & 0\n\n index = index + 1\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n S = S | 1\n elif(signature[index] != \"x\"):\n S = S & 0\n\n index = index + 1\n\n state = \"\" + str(P) + str(Q) + str(R) + str(S)\n\n hidden_state = \"\"\n for j in range(len(state)):\n if(j in dont_care_list):\n hidden_state = hidden_state + \"x\"\n else:\n hidden_state = hidden_state + state[j]\n \n print(state + \" \" + hidden_state)\n recreated_signature = recreated_signature + hidden_state\n\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == 1):\n circuit.x(quantum_register[0])\n \n if(Q == 1):\n circuit.x(quantum_register[1])\n\n if(R == 1):\n circuit.x(quantum_register[2])\n\n if(S == 1):\n circuit.x(quantum_register[3])\n print(recreated_signature)\n print(signature)\n if(recreated_signature == signature):\n print(\"ACCEPT\")\n else:\n print(\"REJECT\")",
"_____no_output_____"
],
[
"start = time.time()\nfor _ in range(len(circuit.data)):\n circuit.data.pop(0)\nrecipient(\"1011\", \"x10x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\", \"1111\")\nfor _ in range(len(circuit.data)):\n circuit.data.pop(0)\nrecipient(\"1011\", \"x00x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\", \"1111\")\nprint(time.time() - start)",
"Message: 1011\n0101 x10x\n1011 1xx1\n0101 x10x\n1111 1xx1\n1010 xx10\n1010 1xx0\n1011 1xx1\n0011 x01x\n1000 1x0x\n1110 x11x\n0110 0x1x\n0110 x1x0\n0010 x0x0\n1111 xx11\nx10x1xx1x10x1xx1xx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\nx10x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\nREJECT\nMessage: 1011\n0001 x00x\n1000 10xx\n1100 x10x\n1111 1x1x\n1010 xx10\n1010 1xx0\n1011 1xx1\n0011 x01x\n1000 1x0x\n1110 x11x\n0110 0x1x\n0110 x1x0\n0010 x0x0\n1111 xx11\nx00x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\nx00x10xxx10x1x1xxx101xx01xx1x01x1x0xx11x0x1xx1x0x0x0xx11\nACCEPT\n11.85292911529541\n"
]
],
[
[
"# Scheme 2\n\nNon-transfer of x\n\nMore secure\n\nRequires one-time additional sharing of a dictionary\n\nFrom the two dictionaries is inferred the total number of output states\n(in the cell below, 2 + 2 = 4)\n",
"_____no_output_____"
]
],
[
[
"constant_index_dictionary = {}\nconstant_index_dictionary['0000'] = [0, 2]\nconstant_index_dictionary['0001'] = [2, 3]\nconstant_index_dictionary['0010'] = [0, 1]\nconstant_index_dictionary['0011'] = [1, 3]\nconstant_index_dictionary['0100'] = [2, 3]\nconstant_index_dictionary['0101'] = [1, 2]\nconstant_index_dictionary['0110'] = [0, 2]\nconstant_index_dictionary['0111'] = [0, 2]\nconstant_index_dictionary['1000'] = [0, 3]\nconstant_index_dictionary['1001'] = [0, 1]\nconstant_index_dictionary['1010'] = [1, 2]\nconstant_index_dictionary['1011'] = [0, 3]\nconstant_index_dictionary['1100'] = [1, 3]\nconstant_index_dictionary['1101'] = [2, 3]\nconstant_index_dictionary['1110'] = [1, 3]\nconstant_index_dictionary['1111'] = [0, 1]\n\n# additional dictionary to be shared\n\nhidden_index_dictionary = {}\nhidden_index_dictionary['0000'] = [1, 3]\nhidden_index_dictionary['0001'] = [0, 1]\nhidden_index_dictionary['0010'] = [2, 3]\nhidden_index_dictionary['0011'] = [0, 2]\nhidden_index_dictionary['0100'] = [0, 1]\nhidden_index_dictionary['0101'] = [0, 3]\nhidden_index_dictionary['0110'] = [1, 3]\nhidden_index_dictionary['0111'] = [1, 3]\nhidden_index_dictionary['1000'] = [1, 2]\nhidden_index_dictionary['1001'] = [2, 3]\nhidden_index_dictionary['1010'] = [0, 3]\nhidden_index_dictionary['1011'] = [1, 2]\nhidden_index_dictionary['1100'] = [0, 2]\nhidden_index_dictionary['1101'] = [0, 1]\nhidden_index_dictionary['1110'] = [0, 2]\nhidden_index_dictionary['1111'] = [2, 3]",
"_____no_output_____"
],
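[
"# Hedged sketch (illustration only; not part of the signing code): for a step leaving\n# state s, constant_index_dictionary[s] lists the two bit positions of the next state\n# that are withheld from the signature, while hidden_index_dictionary[s] lists the two\n# positions that are transmitted; the two lists are complementary for every state.\nfor s in constant_index_dictionary:\n    withheld = set(constant_index_dictionary[s])\n    sent = set(hidden_index_dictionary[s])\n    assert withheld | sent == {0, 1, 2, 3} and not (withheld & sent)\nprint('dictionaries are complementary for all 16 states')",
"_____no_output_____"
],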
[
"import qiskit\nimport numpy as np\nimport time\n\nCLASSICAL_REGISTER_LENGTH = 5\nQUANTUM_REGISTER_LENGTH = 5\n\ncircuit_building_start_time = time.time()\nsimulator = qiskit.Aer.get_backend('qasm_simulator')\nclassical_register = qiskit.ClassicalRegister(CLASSICAL_REGISTER_LENGTH)\nquantum_register = qiskit.QuantumRegister(QUANTUM_REGISTER_LENGTH)\ncircuit = qiskit.QuantumCircuit(quantum_register, classical_register)\ncircuit_building_end_time = time.time()\n\n\nAND_gate_auxillary_qubit = QUANTUM_REGISTER_LENGTH - 1 # last qubit as the auxillary qubit\n\n'''\n Applies quantum AND operation to specified pair of qubits, stores the operation in AND_gate_auxillary_qubit,\n and stores the result in a classical register\n @PARAMS:\n qubit1: position of the first qubit\n qubit2: position of the second qubit\n qubit1_one: whether the first qubit is NOT\n qubit2_one: whether the second qubit is NOT\n classical_register_position: the classical register position to store the measurement of AND_gate_auxillary_qubit\n'''\ndef AND_2_qubit(qubit1, qubit2, qubit1_one, qubit2_one, classical_register_position):\n \n if(qubit1_one):\n circuit.x(quantum_register[qubit1])\n if(qubit2_one):\n circuit.x(quantum_register[qubit2])\n circuit.ccx(quantum_register[qubit1], quantum_register[qubit2], quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_position])\n if(qubit1_one):\n circuit.x(quantum_register[qubit1])\n if(qubit2_one):\n circuit.x(quantum_register[qubit2])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n\n'''\n Applies the AND gate operation on a list of n qubits\n @PARAMS:\n qubit_list: list of qubits to perform the operation on\n qubit_one_list: whether each of those qubits is NOT\n @RETURN:\n result of the n-qubit AND operation\n'''\ndef AND_n_qubits(qubit_list, qubit_one_list):\n \n length = len(qubit_list)\n if(length != len(qubit_one_list)):\n print(\"Incorrect dimensions\")\n return\n classical_register_index = 0 # where to store pairwise AND operation results\n\n # handling odd number of qubits by preprocessing the last qubit\n if(length % 2 != 0):\n if(qubit_one_list[length - 1] == 1):\n circuit.x(quantum_register[qubit_list[length-1]])\n circuit.cx(quantum_register[qubit_list[length - 1]], quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_index])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n classical_register_index = classical_register_index + 1\n if(qubit_one_list[length - 1] == 1):\n circuit.x(quantum_register[qubit_list[length-1]])\n length = length - 1\n\n\n for index in range(length - 1, 0, -2):\n AND_2_qubit(qubit_list[index], qubit_list[index - 1], qubit_one_list[index], qubit_one_list[index - 1], classical_register_index)\n classical_register_index = classical_register_index + 1\n \n job = qiskit.execute(circuit, simulator, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n output = 1\n for index in range(0, classical_register_index, 1):\n output = output & int(counts[CLASSICAL_REGISTER_LENGTH - 1 - index])\n \n return output\n\ndef controlled_n_qubit_h(qubit_list, qubit_one_list):\n output = AND_n_qubits(qubit_list, qubit_one_list)\n if(output == 1):\n circuit.h(quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], 
classical_register[0])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n job = qiskit.execute(circuit, simulator, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n return int(counts[len(counts) - 1])\n return 0\n\n'''\n the main circuit for the following truth table:\n A, B, C, D = binary representation input state for the robot\n P, Q, R, S = binary representation of the output state from the robot\n\n New circuit in register...\n'''\n\ndef main_circuit(STEPS, initial_state):\n signature = \"\"\n state = initial_state\n for _ in range(STEPS):\n\n dont_care_list = constant_index_dictionary[state]\n state = \"\"\n\n P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])\n \n Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])\n\n R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])\n\n S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])\n\n\n state = state + str(P) + str(Q) + str(R) + str(S) \n hidden_state = \"\"\n for j in range(len(state)):\n if(j in dont_care_list):\n pass\n else:\n hidden_state = hidden_state + state[j]\n \n print(state + \" \" + hidden_state)\n signature = signature + hidden_state\n\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == 1):\n circuit.x(quantum_register[0])\n \n if(Q == 1):\n circuit.x(quantum_register[1])\n\n if(R == 1):\n circuit.x(quantum_register[2])\n\n if(S == 1):\n circuit.x(quantum_register[3])\n\n print(\"End state: \" + str(P) + str(Q) + str(R) + str(S) )\n print(\"Signature: \" + signature)\n \ndef initialise_starting_state(P, Q, R, S):\n if(P == 1):\n circuit.x(quantum_register[0])\n if(Q == 1):\n circuit.x(quantum_register[1])\n if(R == 1):\n circuit.x(quantum_register[2])\n if(S == 1):\n circuit.x(quantum_register[3])\n print(\"Message: \" + str(P) + str(Q) + str(R) + str(S))\n\ndef measure_time():\n\n total_time = 0\n for i in range(100):\n start_time = time.time()\n # output = AND_n_qubits([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n output = controlled_n_qubit_h([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n \n print(str(i) + \" \" + str(output))\n end_time = time.time()\n total_time = total_time + (end_time - start_time)\n print(\"Average time: \" + str(total_time/100))\n\n\nstart_time = time.time()\ninitialise_starting_state(0, 1, 0, 1) # message to be signed\nSTEPS = 10 # security parameter: length of the walk\nmain_circuit(STEPS, '0101') \n# 
measure_time()\nend_time = time.time()\nprint(\"Run in time \" + str(end_time - start_time))\nprint(circuit_building_end_time - circuit_building_start_time)",
"Message: 0101\n0110 00\n0010 00\n1101 01\n1001 10\n0100 00\n0110 01\n0010 00\n1100 00\n0101 00\n0110 00\nEnd state: 0110\nSignature: 00000110000100000000\nRun in time 2.812980890274048\n0.0001819133758544922\n"
],
[
"def recipient_initialise_starting_state(P, Q, R, S):\n if(P == \"1\"):\n circuit.x(quantum_register[0])\n if(Q == \"1\"):\n circuit.x(quantum_register[1])\n if(R == \"1\"):\n circuit.x(quantum_register[2])\n if(S == \"1\"):\n circuit.x(quantum_register[3])\n print(\"Message: \" + str(P) + str(Q) + str(R) + str(S))\n\ndef recipient(message, signature, end_state):\n \n # for every 2 bits, there are 2 additional hidden bits, by definition of the shared data structures\n\n STEPS = (2*len(signature))/len(end_state)\n STEPS = int(STEPS)\n index = 0\n recipient_initialise_starting_state(message[0], message[1], message[2], message[3])\n state = message\n recreated_signature = \"\"\n recreated_original_signature = \"\"\n for _ in range(STEPS):\n\n dont_care_list = constant_index_dictionary[state]\n hidden_index_list = hidden_index_dictionary[state]\n # print(state + \" \" + str(hidden_index_list))\n state = \"\"\n \n P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])\n \n Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])\n\n R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])\n\n S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])\n\n\n for i in range(len(hidden_index_list)):\n temp_index = hidden_index_list[i]\n if(temp_index == 0):\n if(signature[index] == '1'):\n P = P | 1\n else:\n P = P & 0\n elif(temp_index == 1):\n if(signature[index] == '1'):\n Q = Q | 1\n else:\n Q = Q & 0\n elif(temp_index == 2):\n if(signature[index] == '1'):\n R = R | 1\n else:\n R = R & 0\n elif(temp_index == 3):\n if(signature[index] == '1'):\n S = S | 1\n else:\n S = S & 0\n index = index + 1\n\n state = \"\" + str(P) + str(Q) + str(R) + str(S)\n\n hidden_state = \"\"\n for j in range(len(state)):\n if(j in dont_care_list):\n # hidden_state = hidden_state + \"x\"\n pass\n else:\n hidden_state = hidden_state + state[j]\n \n print(state + \" \" + hidden_state)\n recreated_signature = recreated_signature + hidden_state\n\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == 1):\n circuit.x(quantum_register[0])\n \n if(Q == 1):\n circuit.x(quantum_register[1])\n\n if(R == 1):\n circuit.x(quantum_register[2])\n\n if(S == 1):\n circuit.x(quantum_register[3])\n\n if(recreated_signature == signature and end_state == state):\n print(\"ACCEPT\")\n else:\n print(\"REJECT\")",
"_____no_output_____"
],
[
"start = time.time()\n# for _ in range(len(circuit.data)):\n# circuit.data.pop(0)\n# recipient(\"0101\", \"10011010111000010011\", \"1111\")\nfor _ in range(len(circuit.data)):\n circuit.data.pop(0)\nrecipient(\"0101\", \"1000110000100000000\", \"0110\")\nprint(time.time() - start)",
"Message: 0101\n1110 10\n0100 00\n1110 11\n0100 00\n0010 00\n1110 10\n0100 00\n0010 00\n1100 00\nREJECT\n2.47572660446167\n"
]
],
[
[
"# k-Path dependent scheme",
"_____no_output_____"
]
],
[
[
"constant_index_dictionary = {}\nconstant_index_dictionary['0000'] = [0, 2]\nconstant_index_dictionary['0001'] = [2, 3]\nconstant_index_dictionary['0010'] = [0, 1]\nconstant_index_dictionary['0011'] = [1, 3]\nconstant_index_dictionary['0100'] = [2, 3]\nconstant_index_dictionary['0101'] = [1, 2]\nconstant_index_dictionary['0110'] = [0, 2]\nconstant_index_dictionary['0111'] = [0, 2]\nconstant_index_dictionary['1000'] = [0, 3]\nconstant_index_dictionary['1001'] = [0, 1]\nconstant_index_dictionary['1010'] = [1, 2]\nconstant_index_dictionary['1011'] = [0, 3]\nconstant_index_dictionary['1100'] = [1, 3]\nconstant_index_dictionary['1101'] = [2, 3]\nconstant_index_dictionary['1110'] = [1, 3]\nconstant_index_dictionary['1111'] = [0, 1]",
"_____no_output_____"
],
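[
"# Hedged sketch (illustration only): the k-path signer below rejects a candidate next\n# state that was already visited on the walk, rolls the circuit back to the previous\n# state and resamples; after len(initial_state) + 10 consecutive rollbacks it aborts.\n# The bookkeeping in plain Python:\nused_states = ['0101', '1010']\ncandidate = '1010'\nif candidate in used_states:\n    print('Rolling back')  # resample instead of revisiting a state\nelse:\n    used_states.append(candidate)",
"_____no_output_____"
],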
[
"import qiskit\nimport numpy as np\nimport time\n\nCLASSICAL_REGISTER_LENGTH = 5\nQUANTUM_REGISTER_LENGTH = 5\n\ncircuit_building_start_time = time.time()\nsimulator = qiskit.Aer.get_backend('qasm_simulator')\nclassical_register = qiskit.ClassicalRegister(CLASSICAL_REGISTER_LENGTH)\nquantum_register = qiskit.QuantumRegister(QUANTUM_REGISTER_LENGTH)\ncircuit = qiskit.QuantumCircuit(quantum_register, classical_register)\ncircuit_building_end_time = time.time()\n\n\nAND_gate_auxillary_qubit = QUANTUM_REGISTER_LENGTH - 1 # last qubit as the auxillary qubit\n\n'''\n Applies quantum AND operation to specified pair of qubits, stores the operation in AND_gate_auxillary_qubit,\n and stores the result in a classical register\n @PARAMS:\n qubit1: position of the first qubit\n qubit2: position of the second qubit\n qubit1_one: whether the first qubit is NOT\n qubit2_one: whether the second qubit is NOT\n classical_register_position: the classical register position to store the measurement of AND_gate_auxillary_qubit\n'''\ndef AND_2_qubit(qubit1, qubit2, qubit1_one, qubit2_one, classical_register_position):\n \n if(qubit1_one):\n circuit.x(quantum_register[qubit1])\n if(qubit2_one):\n circuit.x(quantum_register[qubit2])\n circuit.ccx(quantum_register[qubit1], quantum_register[qubit2], quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_position])\n if(qubit1_one):\n circuit.x(quantum_register[qubit1])\n if(qubit2_one):\n circuit.x(quantum_register[qubit2])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n\n'''\n Applies the AND gate operation on a list of n qubits\n @PARAMS:\n qubit_list: list of qubits to perform the operation on\n qubit_one_list: whether each of those qubits is NOT\n @RETURN:\n result of the n-qubit AND operation\n'''\ndef AND_n_qubits(qubit_list, qubit_one_list):\n \n length = len(qubit_list)\n if(length != len(qubit_one_list)):\n print(\"Incorrect dimensions\")\n return\n classical_register_index = 0 # where to store pairwise AND operation results\n\n # handling odd number of qubits by preprocessing the last qubit\n if(length % 2 != 0):\n if(qubit_one_list[length - 1] == 1):\n circuit.x(quantum_register[qubit_list[length-1]])\n circuit.cx(quantum_register[qubit_list[length - 1]], quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], classical_register[classical_register_index])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n classical_register_index = classical_register_index + 1\n if(qubit_one_list[length - 1] == 1):\n circuit.x(quantum_register[qubit_list[length-1]])\n length = length - 1\n\n\n for index in range(length - 1, 0, -2):\n AND_2_qubit(qubit_list[index], qubit_list[index - 1], qubit_one_list[index], qubit_one_list[index - 1], classical_register_index)\n classical_register_index = classical_register_index + 1\n \n job = qiskit.execute(circuit, simulator, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n output = 1\n for index in range(0, classical_register_index, 1):\n output = output & int(counts[CLASSICAL_REGISTER_LENGTH - 1 - index])\n \n return output\n\ndef controlled_n_qubit_h(qubit_list, qubit_one_list):\n output = AND_n_qubits(qubit_list, qubit_one_list)\n if(output == 1):\n circuit.h(quantum_register[AND_gate_auxillary_qubit])\n circuit.measure(quantum_register[AND_gate_auxillary_qubit], 
classical_register[0])\n circuit.reset(quantum_register[AND_gate_auxillary_qubit])\n job = qiskit.execute(circuit, simulator, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n return int(counts[len(counts) - 1])\n return 0\n\n'''\n the main circuit for the following truth table:\n A, B, C, D = binary representation input state for the robot\n P, Q, R, S = binary representation of the output state from the robot\n\n New circuit in register...\n'''\n\ndef main_circuit(STEPS, initial_state):\n signature = \"\"\n state = initial_state\n used_states = []\n step = 0\n rollback_count = 0\n while True:\n if(step == STEPS):\n break\n dont_care_list = constant_index_dictionary[state]\n rollback_state = state\n if(state not in used_states):\n used_states.append(state)\n state = \"\"\n\n P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])\n \n Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])\n\n R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])\n\n S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])\n\n state = state + str(P) + str(Q) + str(R) + str(S)\n if(state in used_states):\n rollback_count = rollback_count + 1\n if(rollback_count == (len(initial_state) + 10)):\n print(\"Aborting.\")\n return \"ABORT\"\n P = rollback_state[0]\n Q = rollback_state[1]\n R = rollback_state[2]\n S = rollback_state[3]\n state = rollback_state\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == '1'):\n print(\"Rollback reset\")\n circuit.x(quantum_register[0])\n \n if(Q == '1'):\n print(\"Rollback reset\")\n circuit.x(quantum_register[1])\n\n if(R == '1'):\n print(\"Rollback reset\")\n circuit.x(quantum_register[2])\n\n if(S == '1'):\n print(\"Rollback reset\")\n circuit.x(quantum_register[3])\n print(\"Rolling back\")\n continue\n\n step = step + 1 \n rollback = 0\n hidden_state = \"\"\n for j in range(len(state)):\n if(j in dont_care_list):\n hidden_state = hidden_state + \"x\"\n else:\n hidden_state = hidden_state + state[j]\n \n signature = signature + hidden_state\n # print(state + \" \" + hidden_state)\n \n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == 1):\n circuit.x(quantum_register[0])\n \n if(Q == 1):\n circuit.x(quantum_register[1])\n\n if(R == 1):\n circuit.x(quantum_register[2])\n\n if(S == 1):\n circuit.x(quantum_register[3])\n return signature\n\n \ndef initialise_starting_state(P, Q, R, S):\n for _ in range(len(circuit.data)):\n 
circuit.data.pop(0)\n if(P == 1):\n circuit.x(quantum_register[0])\n if(Q == 1):\n circuit.x(quantum_register[1])\n if(R == 1):\n circuit.x(quantum_register[2])\n if(S == 1):\n circuit.x(quantum_register[3])\n print(\"Message: \" + str(P) + str(Q) + str(R) + str(S))\n\ndef measure_time():\n\n total_time = 0\n for i in range(100):\n start_time = time.time()\n # output = AND_n_qubits([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n output = controlled_n_qubit_h([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1])\n \n print(str(i) + \" \" + str(output))\n end_time = time.time()\n total_time = total_time + (end_time - start_time)\n print(\"Average time: \" + str(total_time/100))\n\n",
"_____no_output_____"
]
],
[
[
"Creating a long message\n\n100 *bits*",
"_____no_output_____"
]
],
[
[
"def create_random_message(NUMBER_OF_BITS):\n message = \"\"\n c = qiskit.ClassicalRegister(1)\n q = qiskit.QuantumRegister(1)\n s = qiskit.Aer.get_backend('qasm_simulator')\n\n for i in range(NUMBER_OF_BITS):\n print(i)\n random_circuit = qiskit.QuantumCircuit(q, c)\n random_circuit.h(q[0])\n random_circuit.measure(q[0], c[0])\n\n\n job = qiskit.execute(random_circuit, s, shots=1)\n result = job.result()\n counts = str(result.get_counts())\n counts = counts[counts.find('\\'') + 1:]\n counts = counts[:counts.find('\\'')]\n message = message + counts\n\n print(message)\n\n\ncreate_random_message(100)",
"0\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n15\n16\n17\n18\n19\n20\n21\n22\n23\n24\n25\n26\n27\n28\n29\n30\n31\n32\n33\n34\n35\n36\n37\n38\n39\n40\n41\n42\n43\n44\n45\n46\n47\n48\n49\n50\n51\n52\n53\n54\n55\n56\n57\n58\n59\n60\n61\n62\n63\n64\n65\n66\n67\n68\n69\n70\n71\n72\n73\n74\n75\n76\n77\n78\n79\n80\n81\n82\n83\n84\n85\n86\n87\n88\n89\n90\n91\n92\n93\n94\n95\n96\n97\n98\n99\n1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011\n"
]
],
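[
[
"# Hedged alternative (sketch; assumes the installed Qiskit supports the memory=True\n# option of execute() and Result.get_memory()): instead of building one circuit per\n# random bit, all bits can be sampled from a single Hadamard circuit in one job.\ndef create_random_message_batched(number_of_bits):\n    c = qiskit.ClassicalRegister(1)\n    q = qiskit.QuantumRegister(1)\n    s = qiskit.Aer.get_backend('qasm_simulator')\n    random_circuit = qiskit.QuantumCircuit(q, c)\n    random_circuit.h(q[0])\n    random_circuit.measure(q[0], c[0])\n    job = qiskit.execute(random_circuit, s, shots=number_of_bits, memory=True)\n    return ''.join(job.result().get_memory())  # one '0'/'1' string per shot\n\n# print(create_random_message_batched(100))",
"_____no_output_____"
]
],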
[
[
"Signing a long message",
"_____no_output_____"
]
],
[
[
"def sign_message(message):\n signature = \"\"\n ITER = int(len(message)/4)\n start_time = time.time()\n STEPS = 5 # security parameter: length of the walk\n iter = 0\n while True:\n if(iter == ITER):\n break\n state = message[0:4]\n initialise_starting_state(int(state[0]), int(state[1]), int(state[2]), int(state[3]))\n return_signature = main_circuit(STEPS, state) \n if(return_signature == \"ABORT\"):\n print(\"Rerun\")\n continue\n iter = iter + 1\n signature = signature + return_signature\n message = message[4:]\n end_time = time.time()\n print(\"Run in time \" + str(end_time - start_time))\n \n print(signature)\n\nsign_message('1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011')",
"Message: 1011\nRollback reset\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 0000\nRollback reset\nRollback reset\nRolling back\nMessage: 0101\nMessage: 1010\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRolling back\nMessage: 1100\nRollback reset\nRolling back\nMessage: 1111\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 1011\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nMessage: 0111\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nMessage: 0011\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nMessage: 1001\nMessage: 0000\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 1000\nMessage: 1111\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 0111\nMessage: 0110\nMessage: 1100\nRollback reset\nRollback reset\nRolling back\nMessage: 0101\nMessage: 0010\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 0011\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 0100\nMessage: 1011\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nMessage: 1000\nMessage: 1101\nMessage: 0110\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRollback reset\nRolling back\nRollback reset\nRolling back\nRollback reset\nRolling back\nRollback reset\nRolling back\nRollback reset\nRollback reset\nRolling back\nMessage: 0011\nRun in time 74.02919340133667\nx11xx1x1xx01xx10x0x1x1x01x0x10xxxx10x1x10xx1x1x1xx11x00x00xx0xx0xx11xx00x10x1x0x0x0x1xx1xx00x11x0x0xxx11x11xx1x00x1xx0x1x01x1x0xx01x0xx0xx01x1x1xx00x10x0x0x1xx00x0xx1x101xx0xx0x1x1xx10x1x1x0x1x01x1x0xx1x101xx1xx1xx00x10xx01x1xx1x10x0xx0x0x0xx01xx10x1x1x0x1x00xx0x01xx1x00x11xx1x1xx0x10x0xx1x01x0x10xx1x1xxx00x01x0xx10x1x1xx00x1xx0x10x1xxx11xx0100xx10xxx11x0x0x0x1xxx101x0x1x1xxx0010xx0xx10x1xxx11xx01x10x0xx0x0x0xx0110xxx01x0xx10x0xx0x1xx0000xx10xxx11x0x1xx1x1x0x0xx100x0x10xx0xx10x1xxx100x1xx1x1x1x1\n"
],
[
"print(len('x00x10xxx10x1x1xxx01'))",
"20\n"
],
[
"def recipient_initialise_starting_state(P, Q, R, S):\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n if(P == \"1\"):\n circuit.x(quantum_register[0])\n if(Q == \"1\"):\n circuit.x(quantum_register[1])\n if(R == \"1\"):\n circuit.x(quantum_register[2])\n if(S == \"1\"):\n circuit.x(quantum_register[3])\n print(\"Message: \" + str(P) + str(Q) + str(R) + str(S))\n\ndef recipient(message, signature, end_state):\n STEPS = len(signature)/len(end_state)\n STEPS = int(STEPS)\n index = 0\n recipient_initialise_starting_state(message[0], message[1], message[2], message[3])\n state = message\n recreated_signature = \"\"\n for _ in range(STEPS):\n\n dont_care_list = constant_index_dictionary[state]\n state = \"\"\n \n P = controlled_n_qubit_h([0, 1, 3], [1, 1, 0]) | controlled_n_qubit_h([1, 2], [0, 1]) | controlled_n_qubit_h([0, 2, 3], [0, 0, 1]) | AND_n_qubits([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 3], [1, 1, 1]) | AND_n_qubits([1, 2, 3], [1, 1, 1])\n \n Q = controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 1, 1]) | controlled_n_qubit_h([0, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([1, 2, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 0, 1, 0]) | controlled_n_qubit_h([0, 1, 2, 3], [0, 1, 0, 0]) | AND_n_qubits([0, 1, 3], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 1, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [1, 0, 1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 1, 0])\n\n R = controlled_n_qubit_h([0, 1, 2], [1, 1, 0]) | controlled_n_qubit_h([0, 1, 2], [0, 0, 0]) | controlled_n_qubit_h([0, 1, 3], [0, 1, 0]) | controlled_n_qubit_h([0, 2, 3], [0, 1, 1]) | AND_n_qubits([0, 1], [1, 0]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 1])\n\n S = controlled_n_qubit_h([1, 2, 3], [1, 0, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 1, 1]) | controlled_n_qubit_h([0, 1, 3], [1, 0, 0]) | controlled_n_qubit_h([0, 1, 2], [1, 0, 0]) | controlled_n_qubit_h([1, 2, 3], [0, 0, 0]) | AND_n_qubits([0, 1, 2], [0, 0, 1]) | AND_n_qubits([0, 1, 2, 3], [0, 1, 0, 0])\n\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n P = P | 1\n elif(signature[index] != \"x\"):\n P = P & 0\n \n index = index + 1\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n Q = Q | 1\n elif(signature[index] != \"x\"):\n Q = Q & 0\n \n index = index + 1\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n R = R | 1\n elif(signature[index] != \"x\"):\n R = R & 0\n\n index = index + 1\n\n if(signature[index] != \"x\" and signature[index] == \"1\"):\n S = S | 1\n elif(signature[index] != \"x\"):\n S = S & 0\n\n index = index + 1\n\n state = \"\" + str(P) + str(Q) + str(R) + str(S)\n\n hidden_state = \"\"\n for j in range(len(state)):\n if(j in dont_care_list):\n hidden_state = hidden_state + \"x\"\n else:\n hidden_state = hidden_state + state[j]\n \n recreated_signature = recreated_signature + hidden_state\n print(state + \" \" + hidden_state)\n for _ in range(len(circuit.data)):\n circuit.data.pop(0)\n\n if(P == 1):\n circuit.x(quantum_register[0])\n \n if(Q == 1):\n circuit.x(quantum_register[1])\n\n if(R == 1):\n circuit.x(quantum_register[2])\n\n if(S == 1):\n circuit.x(quantum_register[3])\n print(recreated_signature)\n print(signature)\n if(recreated_signature == signature):\n print(\"ACCEPT\")\n else:\n print(\"REJECT\")\n return recreated_signature\n",
"_____no_output_____"
],
[
"import time\nstart = time.time()\nfor _ in range(len(circuit.data)):\n circuit.data.pop(0)\nSTEPS = int(len('1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011') / 4)\nmessage = '1011000001011010110011111011011100111001000010001111011101101100010100100011010010111000110101100011'\nsignature = 'x11xx1x1xx01xx10x0x1x1x01x0x10xxxx10x1x10xx1x1x1xx11x00x00xx0xx0xx11xx00x10x1x0x0x0x1xx1xx00x11x0x0xxx11x11xx1x00x1xx0x1x01x1x0xx01x0xx0xx01x1x1xx00x10x0x0x1xx00x0xx1x101xx0xx0x1x1xx10x1x1x0x1x01x1x0xx1x101xx1xx1xx00x10xx01x1xx1x10x0xx0x0x0xx01xx10x1x1x0x1x00xx0x01xx1x00x11xx1x1xx0x10x0xx1x01x0x10xx1x1xxx00x01x0xx10x1x1xx00x1xx0x10x1xxx11xx0100xx10xxx11x0x0x0x1xxx101x0x1x1xxx0010xx0xx10x1xxx11xx01x10x0xx0x0x0xx0110xxx01x0xx10x0xx0x1xx0000xx10xxx11x0x1xx1x1x0x0xx100x0x10xx0xx10x1xxx100x1xx1x1x1x1'\ntemp_signature = signature\nk = int(len(signature)/len(message))\nend_index = k*4\nrecipient_signature = \"\"\nfor _ in range(STEPS):\n start_state = message[0:4]\n message = message[4:]\n mess_signature = signature[0:end_index]\n signature = signature[end_index:]\n recipient_signature = recipient_signature + recipient(start_state, mess_signature, '0000')\n\n\nif(recipient_signature == temp_signature):\n print(\"ACCEPT\")\nelse:\n print(\"REJECT\")\nprint(time.time() - start)",
"Message: 1011\n0011 x01x\n0101 0x0x\n0101 0xx1\n1110 1xx0\n1011 1x1x\nx01x0x0x0xx11xx01x1x\nx01xx1x1xx01xx10x0x1\nREJECT\nMessage: 0000\n1100 x1x0\n1101 1x0x\n1001 10xx\n0110 xx10\n0111 x1x1\nx1x01x0x10xxxx10x1x1\nx1x01x0x10xxxx10x1x1\nACCEPT\nMessage: 0101\n0111 0xx1\n1111 x1x1\n1011 xx11\n0001 x00x\n0000 00xx\n0xx1x1x1xx11x00x00xx\n0xx1x1x1xx11x00x00xx\nACCEPT\nMessage: 1010\n0010 0xx0\n1111 xx11\n1000 xx00\n1100 x10x\n1101 1x0x\n0xx0xx11xx00x10x1x0x\n0xx0xx11xx00x10x1x0x\nACCEPT\nMessage: 1100\n0101 0x0x\n1111 1xx1\n1000 xx00\n1110 x11x\n0100 0x0x\n0x0x1xx1xx00x11x0x0x\n0x0x1xx1xx00x11x0x0x\nACCEPT\nMessage: 1111\n1011 xx11\n0111 x11x\n1110 x1x0\n0110 0x1x\n0011 x0x1\nxx11x11xx1x00x1xx0x1\nxx11x11xx1x00x1xx0x1\nACCEPT\nMessage: 1011\n0011 x01x\n1000 1x0x\n1010 x01x\n0010 0xx0\n1101 xx01\nx01x1x0xx01x0xx0xx01\nx01x1x0xx01x0xx0xx01\nACCEPT\nMessage: 0111\n1111 x1x1\n1000 xx00\n1100 x10x\n0101 0x0x\n1110 1xx0\nx1x1xx00x10x0x0x1xx0\nx1x1xx00x10x0x0x1xx0\nACCEPT\nMessage: 0011\n0000 0x0x\n1101 x1x1\n0101 01xx\n0110 0xx0\n0111 x1x1\n0x0xx1x101xx0xx0x1x1\n0x0xx1x101xx0xx0x1x1\nACCEPT\nMessage: 1001\n0110 xx10\n0111 x1x1\n1011 x0x1\n0011 x01x\n1000 1x0x\nxx10x1x1x0x1x01x1x0x\nxx10x1x1x0x1x01x1x0x\nACCEPT\nMessage: 0000\n1101 x1x1\n"
],
[
"1111 1xx1 1111 1xx1\n1011 xx11 1011 xx11\nRolling back\n1101 x10x 0101 x10x\n0001 00xx 0010 0xx0\n1000 10xx 1010 xx10\n\n\n\n\n\n",
"_____no_output_____"
],
[
"print(recipient('1011', 'x11xxx01xx10x0x0xx0100xx11xx1x1xxx11x11x', '0000'))\nprint(recipient('1001', 'xx001x1xxx01xx0011xx1x0x0x1xx1x1xx100xx0', '0000'))",
"Message: 1011\nx11x0x0x1xx00x1xxx0100xx11xx1x1xxx11x11x\nx11xxx01xx10x0x0xx0100xx11xx1x1xxx11x11x\nREJECT\nx11x0x0x1xx00x1xxx0100xx11xx1x1xxx11x11x\nMessage: 1001\nxx00x11x0x0x0xx011xx1x0x0x1xx1x1xx100xx0\nxx001x1xxx01xx0011xx1x0x0x1xx1x1xx100xx0\nREJECT\nxx00x11x0x0x0xx011xx1x0x0x1xx1x1xx100xx0\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
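[
"code"
],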
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0da06f6de7b21692556f59827c5bd6f73bdabea | 31,086 | ipynb | Jupyter Notebook | 01_LLA/02_LLA_run_merge.ipynb | leylabmpi/16S-arc_vertebrate_paper | e1d69e010eb53d26c1ee5a13f0eb530a1580986b | [
"MIT"
] | null | null | null | 01_LLA/02_LLA_run_merge.ipynb | leylabmpi/16S-arc_vertebrate_paper | e1d69e010eb53d26c1ee5a13f0eb530a1580986b | [
"MIT"
] | null | null | null | 01_LLA/02_LLA_run_merge.ipynb | leylabmpi/16S-arc_vertebrate_paper | e1d69e010eb53d26c1ee5a13f0eb530a1580986b | [
"MIT"
] | null | null | null | 39.549618 | 2,208 | 0.613138 | [
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Goal\" data-toc-modified-id=\"Goal-1\"><span class=\"toc-item-num\">1 </span>Goal</a></span></li><li><span><a href=\"#Var\" data-toc-modified-id=\"Var-2\"><span class=\"toc-item-num\">2 </span>Var</a></span></li><li><span><a href=\"#Init\" data-toc-modified-id=\"Init-3\"><span class=\"toc-item-num\">3 </span>Init</a></span></li><li><span><a href=\"#Merging\" data-toc-modified-id=\"Merging-4\"><span class=\"toc-item-num\">4 </span>Merging</a></span><ul class=\"toc-item\"><li><span><a href=\"#SV-artifact\" data-toc-modified-id=\"SV-artifact-4.1\"><span class=\"toc-item-num\">4.1 </span>SV artifact</a></span></li><li><span><a href=\"#rep-seqs\" data-toc-modified-id=\"rep-seqs-4.2\"><span class=\"toc-item-num\">4.2 </span>rep-seqs</a></span></li><li><span><a href=\"#Taxonomy\" data-toc-modified-id=\"Taxonomy-4.3\"><span class=\"toc-item-num\">4.3 </span>Taxonomy</a></span></li></ul></li><li><span><a href=\"#Alignment\" data-toc-modified-id=\"Alignment-5\"><span class=\"toc-item-num\">5 </span>Alignment</a></span><ul class=\"toc-item\"><li><span><a href=\"#Creating-alignment\" data-toc-modified-id=\"Creating-alignment-5.1\"><span class=\"toc-item-num\">5.1 </span>Creating alignment</a></span></li><li><span><a href=\"#Masking-alignment\" data-toc-modified-id=\"Masking-alignment-5.2\"><span class=\"toc-item-num\">5.2 </span>Masking alignment</a></span></li></ul></li><li><span><a href=\"#Phylogeny\" data-toc-modified-id=\"Phylogeny-6\"><span class=\"toc-item-num\">6 </span>Phylogeny</a></span><ul class=\"toc-item\"><li><span><a href=\"#Unrooted-tree\" data-toc-modified-id=\"Unrooted-tree-6.1\"><span class=\"toc-item-num\">6.1 </span>Unrooted tree</a></span></li><li><span><a href=\"#Rooted-tree\" data-toc-modified-id=\"Rooted-tree-6.2\"><span class=\"toc-item-num\">6.2 </span>Rooted tree</a></span></li></ul></li><li><span><a href=\"#sessionInfo\" data-toc-modified-id=\"sessionInfo-7\"><span class=\"toc-item-num\">7 </span>sessionInfo</a></span></li></ul></div>",
"_____no_output_____"
],
[
"# Goal\n\n* Merge results from all per-MiSeq-run `LLA` jobs\n* Merging feature tables for multiple sequencing runs:\n * MiSeq-Run0116\n * MiSeq-Run0122\n * MiSeq-Run0126\n * **NOT** MiSeq-Run187 (failed run)\n * MiSeq-run0189\n* Then running standard processing:\n * dataset summary\n * taxonomy\n * phylogeny",
"_____no_output_____"
],
[
"# Var",
"_____no_output_____"
]
],
[
[
"work_dir = '/ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged/'\n\nrun_dir = '/ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/'\nmiseq_runs = c('Run0116', 'Run0122', 'Run0126', 'Run0189', 'Run0190')\n\n# params\nconda_env = 'qiime2-2019.10'\nthreads = 24",
"_____no_output_____"
]
],
[
[
"# Init",
"_____no_output_____"
]
],
[
[
"library(dplyr)\nlibrary(tidyr)\nlibrary(ggplot2)\nlibrary(LeyLabRMisc)",
"\nAttaching package: ‘dplyr’\n\n\nThe following objects are masked from ‘package:stats’:\n\n filter, lag\n\n\nThe following objects are masked from ‘package:base’:\n\n intersect, setdiff, setequal, union\n\n\n"
],
[
"df.dims()\nmake_dir(work_dir)",
"Directory already exists: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged/ \n"
]
],
[
[
"# Merging",
"_____no_output_____"
],
[
"## SV artifact",
"_____no_output_____"
]
],
[
[
"# artifacts for individual runs\nP = file.path(run_dir, '{run}', 'table_merged_filt.qza')\nruns = miseq_runs %>% as.list %>%\n lapply(function(x) glue::glue(P, run=x))\n \nruns",
"_____no_output_____"
],
[
"# function to merge tables\nmerge_tables = function(in_tbls, out_tbl, conda_env){\n cmd = 'qiime feature-table merge --i-tables {in_tbls} --o-merged-table {out_tbl} --p-overlap-method sum'\n cmd = glue::glue(cmd, in_tbls=in_tbls, out_tbl=out_tbl)\n cat('CMD:', cmd, '\\n')\n ret = bash_job(cmd, conda_env=conda_env, stderr=TRUE)\n cat(ret, '\\n')\n return(out_tbl)\n}",
"_____no_output_____"
],
[
"# merging\ntable_merged_file = file.path(work_dir, 'table_merged_filt.qza')\ntable_merged_file = merge_tables(paste(runs, collapse=' '), table_merged_file, conda_env)\ncat('Output file:', table_merged_file, '\\n')",
"CMD: qiime feature-table merge --i-tables /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0116/table_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0122/table_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0126/table_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0189/table_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0190/table_merged_filt.qza --o-merged-table /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//table_merged_filt.qza --p-overlap-method sum \nSaved FeatureTable[Frequency] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//table_merged_filt.qza \nOutput file: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//table_merged_filt.qza \n"
]
],
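[
[
"# Optional sanity check (sketch; assumes the qiime2 conda env provides `qiime tools peek`):\n# inspect the merged artifact's semantic type and UUID before moving on\ncmd = 'qiime tools peek {in_tbl}'\ncmd = glue::glue(cmd, in_tbl=table_merged_file)\nbash_job(cmd, conda_env=conda_env, stderr=TRUE)",
"_____no_output_____"
]
],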
[
[
"## rep-seqs",
"_____no_output_____"
]
],
[
[
"# artifacts for individual runs\nP = file.path(run_dir, '{run}', 'rep-seqs_merged_filt.qza')\nruns = miseq_runs %>% as.list %>%\n lapply(function(x) glue::glue(P, run=x))\n \nruns",
"_____no_output_____"
],
[
"# function to merge seqs \nmerge_seqs = function(in_seqs, out_seq, conda_env){\n cmd = 'qiime feature-table merge-seqs --i-data {in_seqs} --o-merged-data {out_seq}'\n cmd = glue::glue(cmd, in_seqs=in_seqs, out_tbl=out_seq)\n cat('CMD:', cmd, '\\n')\n ret = bash_job(cmd, conda_env=conda_env, stderr=TRUE)\n cat(ret, '\\n')\n return(out_seq)\n}",
"_____no_output_____"
],
[
"# merging\nseqs_merged_file = file.path(work_dir, 'rep-seqs_merged_filt.qza')\nseqs_merged_file = merge_seqs(paste(runs, collapse=' '), seqs_merged_file, conda_env)\ncat('Output file:', seqs_merged_file, '\\n')",
"CMD: qiime feature-table merge-seqs --i-data /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0116/rep-seqs_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0122/rep-seqs_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0126/rep-seqs_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0189/rep-seqs_merged_filt.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0190/rep-seqs_merged_filt.qza --o-merged-data /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//rep-seqs_merged_filt.qza \nSaved FeatureData[Sequence] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//rep-seqs_merged_filt.qza \nOutput file: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//rep-seqs_merged_filt.qza \n"
]
],
[
[
"## Taxonomy",
"_____no_output_____"
]
],
[
[
"# artifacts for individual runs\nP = file.path(run_dir, '{run}', 'taxonomy.qza')\nruns = miseq_runs %>% as.list %>%\n lapply(function(x) glue::glue(P, run=x))\n \nruns",
"_____no_output_____"
],
[
"# function to merge tax \nmerge_tax = function(in_taxs, out_tax, conda_env){\n cmd = 'qiime feature-table merge-taxa --i-data {in_seqs} --o-merged-data {out_tax}'\n cmd = glue::glue(cmd, in_seqs=in_taxs, out_tbl=out_tax)\n cat('CMD:', cmd, '\\n')\n ret = bash_job(cmd, conda_env=conda_env, stderr=TRUE)\n cat(ret, '\\n')\n return(out_tax)\n}",
"_____no_output_____"
],
[
"# merging\ntax_merged_file = file.path(work_dir, 'taxonomy.qza')\ntax_merged_file = merge_tax(paste(runs, collapse=' '), tax_merged_file, conda_env)\ncat('Output file:', tax_merged_file, '\\n')",
"CMD: qiime feature-table merge-taxa --i-data /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0116/taxonomy.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0122/taxonomy.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0126/taxonomy.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0189/taxonomy.qza /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA//Run0190/taxonomy.qza --o-merged-data /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//taxonomy.qza \nSaved FeatureData[Taxonomy] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//taxonomy.qza \nOutput file: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//taxonomy.qza \n"
]
],
[
[
"# Alignment",
"_____no_output_____"
],
[
"## Creating alignment",
"_____no_output_____"
]
],
[
[
"aln_file = file.path(work_dir, 'aligned-rep-seqs_filt.qza')\n\ncmd = 'qiime alignment mafft --p-n-threads {threads} --i-sequences {in_seq} --o-alignment {out_aln}'\ncmd = glue::glue(cmd, threads=threads, in_seq=seqs_merged_file, out_aln=aln_file)\nbash_job(cmd, conda_env=conda_env, stderr=TRUE)\n",
"Saved FeatureData[AlignedSequence] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//aligned-rep-seqs_filt.qza"
]
],
[
[
"## Masking alignment",
"_____no_output_____"
]
],
[
[
"aln_mask_file = file.path(work_dir, 'aligned-rep-seqs_filt_masked.qza')\n\ncmd = 'qiime alignment mask --i-alignment {in_aln} --o-masked-alignment {out_aln}'\ncmd = glue::glue(cmd, in_aln=aln_file, out_aln=aln_mask_file)\nbash_job(cmd, conda_env=conda_env, stderr=TRUE)\n",
"Saved FeatureData[AlignedSequence] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//aligned-rep-seqs_filt_masked.qza"
]
],
[
[
"# Phylogeny",
"_____no_output_____"
],
[
"## Unrooted tree",
"_____no_output_____"
]
],
[
[
"phy_unroot_file = file.path(work_dir, 'aligned-rep-seqs_filt_masked_unroot-tree.qza')\n\ncmd = 'qiime phylogeny fasttree --p-n-threads {threads} --i-alignment {in_aln} --o-tree {out_phy}'\ncmd = glue::glue(cmd, threads=threads, in_aln=aln_mask_file, out_phy=phy_unroot_file)\nbash_job(cmd, conda_env=conda_env, stderr=TRUE)\n",
"Saved Phylogeny[Unrooted] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//aligned-rep-seqs_filt_masked_unroot-tree.qza"
]
],
[
[
"## Rooted tree",
"_____no_output_____"
]
],
[
[
"phy_root_file = file.path(work_dir, 'aligned-rep-seqs_filt_masked_midroot-tree.qza')\n\ncmd = 'qiime phylogeny midpoint-root --i-tree {in_phy} --o-rooted-tree {out_phy}'\ncmd = glue::glue(cmd, in_phy=phy_unroot_file, out_phy=phy_root_file)\nbash_job(cmd, conda_env=conda_env, stderr=TRUE)\n",
"Saved Phylogeny[Rooted] to: /ebio/abt3_projects/Georg_animal_feces/data/16S_arch/MiSeq-Runs-116-122-126-189-190/LLA/merged//aligned-rep-seqs_filt_masked_midroot-tree.qza"
]
],
[
[
"# sessionInfo",
"_____no_output_____"
]
],
[
[
"sessionInfo()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
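[
"code"
],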
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0da0a1e86970ffcd35b5bc74cee8fd466d2114a | 24,705 | ipynb | Jupyter Notebook | dati_2014/04/example.ipynb | shishitao/boffi_dynamics | 365f16d047fb2dbfc21a2874790f8bef563e0947 | [
"MIT"
] | null | null | null | dati_2014/04/example.ipynb | shishitao/boffi_dynamics | 365f16d047fb2dbfc21a2874790f8bef563e0947 | [
"MIT"
] | null | null | null | dati_2014/04/example.ipynb | shishitao/boffi_dynamics | 365f16d047fb2dbfc21a2874790f8bef563e0947 | [
"MIT"
] | 2 | 2019-06-23T12:32:39.000Z | 2021-08-15T18:33:55.000Z | 122.30198 | 20,151 | 0.860959 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0da256acc1390e5828040ed23bea7d864ac40dd | 689,398 | ipynb | Jupyter Notebook | pra1_tipologia.ipynb | creinaf/TCVD-Practica-1-Web-scraping | 6c677d5e2e2ef53ddb9b68af796590d504ec9622 | [
"CC0-1.0"
] | null | null | null | pra1_tipologia.ipynb | creinaf/TCVD-Practica-1-Web-scraping | 6c677d5e2e2ef53ddb9b68af796590d504ec9622 | [
"CC0-1.0"
] | null | null | null | pra1_tipologia.ipynb | creinaf/TCVD-Practica-1-Web-scraping | 6c677d5e2e2ef53ddb9b68af796590d504ec9622 | [
"CC0-1.0"
] | null | null | null | 404.813858 | 47,916 | 0.920986 | [
[
[
"# Importamos las librerías necesarias\nfrom bs4 import BeautifulSoup\nimport requests\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport statistics as st",
"_____no_output_____"
],
[
"# Fijamos url de la web\nurl = 'https://tarifaluzhora.es/'\n\n# Hacemos la petición a la página\nresponse = requests.get(url)\nsoup = BeautifulSoup(response.text, 'html.parser')\n\n# Obtenemos las horas\nhoras = soup.find_all('span', itemprop=\"description\")\n\n# Obtenemos los precios\nprecios = soup.find_all('span', itemprop=\"price\")\n\n# Obtenemos la fecha\ndate = soup.find('input', {'name': 'date'}).get('value')\n\n# Creamos un array con el contenido de las horas\ncolumnas = ['fecha']\nfor h in horas:\n columnas.append(h.text)\n\n# Creamos un array con el contenido de los precios\ncontenido = [date]\nfor p in precios:\n contenido.append(p.text)\n\n\n# Creamos un dataset con los datos del día actual, cuyas columnas son la fecha y las horas\ndf = pd.DataFrame(data=[np.array(contenido)], columns=columnas)\ndf",
"_____no_output_____"
],
[
"# Creamos array vacio de urls\nurls=[]\n\n# Recorremos rango de fechas hacia atrás\nfor i in range(2022, 2020, -1):\n # Si la fecha es 2022 solo queremos los tres primeros meses\n if i==2022:\n for j in range(3,0,-1):\n # Para febrero sólo recorremos 28 días\n if j==2:\n for k in range(28,0,-1):\n url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)\n urls.append(url)\n else:\n # Para el resto de meses recorremos 31\n for k in range(31,0,-1):\n url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)\n urls.append(url)\n # Si la fecha es 2021 solo queremos hasta junio\n else:\n for j in range(12,5,-1):\n # Para junio, septiembre y noviembre recorremos 30 días\n if (j==6) | (j==9) | (j==11):\n for k in range(30,0,-1):\n url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)\n urls.append(url)\n else:\n # Para el resto recorremos 31 días\n for k in range(31,0,-1):\n url = 'https://tarifaluzhora.es/?tarifa=pcb&fecha='+str(k).zfill(2)+'%2F'+str(j).zfill(2)+'%2F'+str(i)\n urls.append(url)",
"_____no_output_____"
],
[
"# Recorremos el array de urls\nfor i in urls:\n # Fijamos url de la web\n url = i\n\n # Hacemos la petición a la página\n response = requests.get(url)\n soup = BeautifulSoup(response.text, 'html.parser')\n\n # Obtenemos los precios\n precios = soup.find_all('span', itemprop=\"price\")\n\n # Obtenemos la fecha\n fecha = soup.find('input', {'name': 'date'}).get('value')\n\n # Creamos un array con el contenido de las horas\n columnas = ['fecha']\n for h in horas:\n columnas.append(h.text)\n\n # Creamos un array con el contenido de los precios\n contenido = [fecha]\n for p in precios:\n contenido.append(p.text)\n\n # Creamos el df\n df1 = pd.DataFrame(data=[np.array(contenido)], columns=columnas)\n\n # Lo unimos al df original para crear la bd\n df = pd.concat([df, df1])",
"_____no_output_____"
],
[
"print(float(precios[1].text.split(' ')[0]))\nprint(type(float(precios[1].text.split(' ')[0])))",
"0.11595\n<class 'float'>\n"
],
[
"# Cabecera del dataset\ndf.head()",
"_____no_output_____"
],
[
"# Cola del dataset\ndf.tail()",
"_____no_output_____"
],
[
"# Convertimos todos los precios a datos de tipo float\n\nfor i in range(1,len(df.columns)):\n for j in range(0, len(df)):\n df.iloc[j][i] = float(df.iloc[j][i].split(' ')[0])",
" fecha 00h - 01h: 01h - 02h: 02h - 03h: 03h - 04h: 04h - 05h: \\\n0 05/04/2022 0.30614 0.29737 0.269 0.27546 0.28355 \n0 31/03/2022 0.34653 0.33138 0.33063 0.32335 0.3171 \n0 30/03/2022 0.33468 0.31902 0.30903 0.29593 0.29989 \n0 29/03/2022 0.30449 0.2824 0.27649 0.27374 0.27725 \n0 28/03/2022 0.28296 0.27426 0.26973 0.26371 0.26316 \n.. ... ... ... ... ... ... \n0 05/06/2021 0.1034 0.09697 0.09435 0.09548 0.09431 \n0 04/06/2021 0.10312 0.10295 0.10252 0.10581 0.10577 \n0 03/06/2021 0.10958 0.10608 0.10296 0.10445 0.10462 \n0 02/06/2021 0.1162 0.11437 0.11253 0.11287 0.11185 \n0 01/06/2021 0.11633 0.11595 0.11489 0.11496 0.11484 \n\n 05h - 06h: 06h - 07h: 07h - 08h: 08h - 09h: ... 14h - 15h: 15h - 16h: \\\n0 0.30304 0.28067 0.32942 0.36162 ... 0.3683 0.36857 \n0 0.32118 0.32683 0.34811 0.38216 ... 0.26185 0.25855 \n0 0.3092 0.32472 0.32824 0.37147 ... 0.33943 0.33612 \n0 0.29108 0.32639 0.32426 0.36677 ... 0.32994 0.32321 \n0 0.278 0.3068 0.30449 0.33477 ... 0.2937 0.28725 \n.. ... ... ... ... ... ... ... \n0 0.09451 0.09376 0.09609 0.10283 ... 0.10482 0.10025 \n0 0.10825 0.11109 0.11252 0.14952 ... 0.13715 0.13409 \n0 0.10753 0.11176 0.11282 0.14944 ... 0.13265 0.12378 \n0 0.11197 0.11576 0.11564 0.15296 ... 0.13851 0.13615 \n0 0.11603 0.11629 0.1157 0.15289 ... 0.13796 0.13288 \n\n 16h - 17h: 17h - 18h: 18h - 19h: 19h - 20h: 20h - 21h: 21h - 22h: \\\n0 0.36682 0.3663 0.42192 0.43025 0.44157 0.43377 \n0 0.24957 0.25059 0.31642 0.36794 0.41866 0.42199 \n0 0.32761 0.31185 0.39006 0.4168 0.43635 0.44541 \n0 0.31876 0.33254 0.40217 0.40538 0.41921 0.42276 \n0 0.2872 0.28162 0.39677 0.41407 0.45403 0.44684 \n.. ... ... ... ... ... ... \n0 0.09899 0.09859 0.09919 0.09966 0.1073 0.11092 \n0 0.12952 0.13379 0.23188 0.23658 0.23832 0.23939 \n0 0.12341 0.12676 0.22634 0.23488 0.23745 0.24213 \n0 0.13227 0.13619 0.2312 0.23677 0.23927 0.245 \n0 0.13193 0.13599 0.23144 0.2404 0.2462 0.24808 \n\n 22h - 23h: 23h - 24h: \n0 0.38235 0.37751 \n0 0.34661 0.30279 \n0 0.35615 0.35182 \n0 0.33883 0.32508 \n0 0.34658 0.33382 \n.. ... ... \n0 0.1113 0.10722 \n0 0.14442 0.14065 \n0 0.14956 0.14721 \n0 0.15092 0.14831 \n0 0.15591 0.1565 \n\n[305 rows x 25 columns]\n"
],
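[
"# Hedged alternative to the loop above (shown for reference only; the\n# conversion has already been applied): pandas string methods vectorize\n# the '0.30614 €/kWh' -> 0.30614 parsing over all price columns at once.\nprice_cols = df.columns[1:]\ndf[price_cols] = df[price_cols].apply(lambda s: s.astype(str).str.split(' ').str[0].astype(float))\ndf.dtypes",
"_____no_output_____"
],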
[
"# Añadimos una columna que indique en qué unindades se mide el precio de la luz\n\ndf = df.assign(unidad = ['€/kWh' for i in range(0, len(df))])\ndf",
"_____no_output_____"
],
[
"# Exportamos el dataframe a un documento .csv\n\ndf.to_csv(r'export_dataframe.csv', index = False, header=True)",
"_____no_output_____"
],
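[
"# Hedged round-trip check (assumes the file written above): read the CSV\n# back and parse 'fecha' as a datetime so rows can be ordered in time.\ndf_check = pd.read_csv('export_dataframe.csv')\ndf_check['fecha'] = pd.to_datetime(df_check['fecha'], format='%d/%m/%Y')\ndf_check.sort_values('fecha').head()",
"_____no_output_____"
],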
[
"# Graficamos el precio promedio diario (305 días)\n\nprice_all = []\nprice_day_all = []\n\n\nfor j in range(0, len(df)):\n for i in range(1,len(df.columns)-1):\n price_all.append(df.iloc[j][i])\n \n price_day_all.append(st.mean(price_all))\n\nprice_day_all = list(reversed(price_day_all)) \ndias = list(range(0, len(price_day_all)))\n\nplt.xlabel(\"Días\")\nplt.ylabel(\"Precio €/kWh\")\nplt.title(f'Precio promedio diario de la luz')\nplt.plot(dias, price_day_all)\nplt.show()",
"_____no_output_____"
],
[
"# Graficamos el precio a escala hora y diaria (meses 6, 7, 8, 9, 10, 11, 12 del 2021 y meses 1, 2, 3 del 2022)\n\ndef graficas_meses(precio_hora, precio_dia, mes, mes_num):\n \n if (mes_num == \"/01/\" or mes_num == \"/02/\" or mes_num == \"/03/\"):\n year = 2022\n \n else:\n year = 2021\n \n precio_hora = list(reversed(precio_hora))\n horas = list(range(0, len(precio_hora)))\n \n print(f'El comportamiento del precio de la luz en {mes} de {year} es:\\n')\n \n # Graficamos el precio de cada hora de todos los días del mes\n plt.plot(horas, precio_hora)\n plt.xlabel(\"Horas\")\n plt.ylabel(\"Precio €/kWh\")\n plt.title(f'Precio por horas de la luz en {mes} de {year}')\n plt.show()\n \n precio_dia = list(reversed(precio_dia))\n dia = list(range(0, len(precio_dia)))\n \n # Graficamos el precio promedio de cada día\n plt.plot(dia, precio_dia)\n plt.xlabel(\"Días\")\n plt.ylabel(\"Precio €/kWh\")\n plt.title(f'Precio promedio diario de la luz en {mes} de {year}')\n plt.show()\n \n print(f'El precio mínimo en {mes} de {year} fue de {round(min(precio_hora), 3)} €/kWh y el precio diario promedio mínimo fue de {round(min(precio_dia), 3)} €/kWh\\n')\n print(f'El precio máximo en {mes} de {year} fue de {round(max(precio_hora), 3)} €/kWh y el precio diario promedio máximo fue de {round(max(precio_dia), 3)} €/kWh\\n\\n\\n')\n ",
"_____no_output_____"
],
[
"mes = {'junio':'/06/', 'julio':'/07/', 'agosto':'/08/', 'septiembre':'/09/', 'octubre':'/10/', 'noviembre':'/11/', 'diciembre':'/12/', \n 'enero':'/01/', 'febrero':'/02/', 'marzo':'/03/'}\n\n# Escogemos un sub-dataframe para cada mes y graficarlo por separado\nfor i in range(0, len(mes)):\n price_hour = []\n price_day = []\n df_month = df[df[\"fecha\"].str.contains(list(mes.values())[i])]\n \n \n for j in range(0, len(df_month)):\n for k in range(1, len(df_month.columns)-1):\n # Se selecciona el precio de cada hora\n price_hour.append(df_month.iloc[j][k])\n \n # Se promedia el precio de cada día\n price_day.append(st.mean(price_hour))\n \n # Llamamos a la función encargada de graficar\n graficas_meses(price_hour, price_day, list(mes.keys())[i], list(mes.values())[i])",
"El comportamiento del precio de la luz en junio de 2021 es:\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0da443d46e1c379457401bdef47b9f02e9e44e3 | 728,666 | ipynb | Jupyter Notebook | Keras_Tensorflow/03_TestLocally.ipynb | marabout2015/AKSDeploymentTutorial | 8914ac1886405a8a3194e8ad2f9aafe58d19708f | [
"MIT"
] | 1 | 2020-08-06T10:37:49.000Z | 2020-08-06T10:37:49.000Z | Keras_Tensorflow/03_TestLocally.ipynb | marabout2015/AKSDeploymentTutorial | 8914ac1886405a8a3194e8ad2f9aafe58d19708f | [
"MIT"
] | null | null | null | Keras_Tensorflow/03_TestLocally.ipynb | marabout2015/AKSDeploymentTutorial | 8914ac1886405a8a3194e8ad2f9aafe58d19708f | [
"MIT"
] | null | null | null | 1,932.801061 | 559,464 | 0.961214 | [
[
[
"# Test web application locally\n",
"_____no_output_____"
],
[
"This notebook pulls some images and tests them against the local web app running inside the Docker container we made previously.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom testing_utilities import *\nimport requests\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"docker_login = 'fboylu'\nimage_name = docker_login + '/kerastf-gpu'",
"_____no_output_____"
]
],
[
[
"Run the Docker conatainer in the background and open port 80. Notice we are using nvidia-docker and not docker command.",
"_____no_output_____"
]
],
[
[
"%%bash --bg -s \"$image_name\"\nnvidia-docker run -p 80:80 $1",
"_____no_output_____"
]
],
[
[
"Wait a few seconds for the application to spin up and then check that everything works.",
"_____no_output_____"
]
],
[
[
"!curl 'http://0.0.0.0:80/'",
"Healthy"
],
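[
"# Hedged helper (the names here are ours, not part of the tutorial): poll\n# the health endpoint until the container answers instead of sleeping blindly.\nimport time\n\ndef wait_until_healthy(url='http://0.0.0.0:80/', timeout=60):\n    deadline = time.time() + timeout\n    while time.time() < deadline:\n        try:\n            if requests.get(url, timeout=2).status_code == 200:\n                return True\n        except requests.exceptions.ConnectionError:\n            pass\n        time.sleep(2)\n    return False\n\nwait_until_healthy()",
"_____no_output_____"
],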
[
"!curl 'http://0.0.0.0:80/version'",
"1.4.1"
]
],
[
[
"Pull an image of a Lynx to test our local web app with.",
"_____no_output_____"
]
],
[
[
"IMAGEURL = \"https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg\"",
"_____no_output_____"
],
[
"plt.imshow(to_img(IMAGEURL))",
"_____no_output_____"
],
[
"jsonimg = img_url_to_json(IMAGEURL)\njsonimg[:100] ",
"_____no_output_____"
],
[
"headers = {'content-type': 'application/json'}\n%time r = requests.post('http://0.0.0.0:80/score', data=jsonimg, headers=headers)\nprint(r)\nr.json()",
"CPU times: user 4.86 ms, sys: 0 ns, total: 4.86 ms\nWall time: 2.4 s\n<Response [200]>\n"
]
],
[
[
"Let's try a few more images.",
"_____no_output_____"
]
],
[
[
"images = ('https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Lynx_lynx_poing.jpg/220px-Lynx_lynx_poing.jpg',\n 'https://upload.wikimedia.org/wikipedia/commons/3/3a/Roadster_2.5_windmills_trimmed.jpg',\n 'http://www.worldshipsociety.org/wp-content/themes/construct/lib/scripts/timthumb/thumb.php?src=http://www.worldshipsociety.org/wp-content/uploads/2013/04/stock-photo-5495905-cruise-ship.jpg&w=570&h=370&zc=1&q=100',\n 'http://yourshot.nationalgeographic.com/u/ss/fQYSUbVfts-T7pS2VP2wnKyN8wxywmXtY0-FwsgxpiZv_E9ZfPsNV5B0ER8-bOdruvNfMD5EbP4SznWz4PYn/',\n 'https://cdn.arstechnica.net/wp-content/uploads/2012/04/bohol_tarsier_wiki-4f88309-intro.jpg',\n 'http://i.telegraph.co.uk/multimedia/archive/03233/BIRDS-ROBIN_3233998b.jpg')",
"_____no_output_____"
],
[
"url = 'http://0.0.0.0:80/score'\nresults = [requests.post(url, data=img_url_to_json(img), headers=headers) for img in images]",
"_____no_output_____"
],
[
"plot_predictions_dict(images, results)",
"/anaconda/envs/AKSDeploymentKeras/lib/python3.5/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Passing one of 'on', 'true', 'off', 'false' as a boolean is deprecated; use an actual boolean (True/False) instead.\n warnings.warn(message, mplDeprecation, stacklevel=1)\n"
]
],
[
[
"Next let's quickly check what the request response performance is for the locally running Docker container.",
"_____no_output_____"
]
],
[
[
"image_data = list(map(img_url_to_json, images)) # Retrieve the images and data",
"_____no_output_____"
],
[
"timer_results = list()\nfor img in image_data:\n res=%timeit -r 1 -o -q requests.post(url, data=img, headers=headers)\n timer_results.append(res.best)",
"_____no_output_____"
],
[
"timer_results",
"_____no_output_____"
],
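[
"# Hedged extra stats for the timings above: timer_results holds the best\n# wall-clock seconds per image, so min/max bracket the mean printed below.\nprint('Fastest: {0:4.2f} ms'.format(10**3 * np.min(timer_results)))\nprint('Slowest: {0:4.2f} ms'.format(10**3 * np.max(timer_results)))",
"_____no_output_____"
],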
[
"print('Average time taken: {0:4.2f} ms'.format(10**3 * np.mean(timer_results)))",
"Average time taken: 85.31 ms\n"
],
[
"%%bash\ndocker stop $(docker ps -q)",
"edb629d68eca\n"
]
],
[
[
"We can now [deploy our web application on AKS](04_DeployOnAKS.ipynb).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0da452818c6df687f2dcac1e4af28a61cefeb94 | 22,645 | ipynb | Jupyter Notebook | notebooks/Correlation_Spanning_Tree_example.ipynb | Carromattsson/netrd | ec3360b5a9e241f318d7c8e729db3aeed9f2e5b7 | [
"MIT"
] | null | null | null | notebooks/Correlation_Spanning_Tree_example.ipynb | Carromattsson/netrd | ec3360b5a9e241f318d7c8e729db3aeed9f2e5b7 | [
"MIT"
] | null | null | null | notebooks/Correlation_Spanning_Tree_example.ipynb | Carromattsson/netrd | ec3360b5a9e241f318d7c8e729db3aeed9f2e5b7 | [
"MIT"
] | null | null | null | 157.256944 | 19,762 | 0.896754 | [
[
[
"import numpy as np\nimport networkx as nx\nfrom matplotlib import pyplot as plt\nfrom netrd.reconstruction import CorrelationSpanningTree\n%matplotlib inline",
"_____no_output_____"
],
[
"N = 25\nT = 300\nM = np.random.normal(size=(N,T))",
"_____no_output_____"
],
[
"print('Create correlated time series')\nmarket_mode = 0.4*np.random.normal(size=(1,T))\nM += market_mode\n\nsector_modes = {d: 0.5*np.random.normal(size=(1,T)) for d in range(5)}\nfor sector_mode, vals in sector_modes.items():\n M[sector_mode*5:(sector_mode+1)*5,:] += vals",
"Create correlated time series\n"
],
[
"print('Associate node colors to sectors')\ncolors = ['b','r','g','y','m']\nnode_colors = [color for color in colors for __ in range(5)]",
"Associate node colors to sectors\n"
],
[
"print('Network reconstruction step')\ncst_net = CorrelationSpanningTree()\nG = cst_net.fit(M)",
"Network reconstruction step\n"
],
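[
"# Hedged structural check (plain networkx calls): a spanning tree over the\n# N nodes should be connected and acyclic, i.e. have exactly N-1 edges.\nprint('is_tree: {} | nodes: {} | edges: {}'.format(nx.is_tree(G), G.number_of_nodes(), G.number_of_edges()))",
"_____no_output_____"
],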
[
"print('Plot reconstructed spanning tree')\nfig, ax = plt.subplots()\nnx.draw(G, ax=ax, node_color=node_colors)",
"Plot reconstructed spanning tree\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0da467e27ae069661224f59f471d0e4daf1d6c1 | 86,349 | ipynb | Jupyter Notebook | week02_classification/homework_part1.ipynb | andnlv/natural-language-processing | 954f390d43a44ccb2d79e6e846ab5e0f2e57f70b | [
"MIT"
] | null | null | null | week02_classification/homework_part1.ipynb | andnlv/natural-language-processing | 954f390d43a44ccb2d79e6e846ab5e0f2e57f70b | [
"MIT"
] | null | null | null | week02_classification/homework_part1.ipynb | andnlv/natural-language-processing | 954f390d43a44ccb2d79e6e846ab5e0f2e57f70b | [
"MIT"
] | null | null | null | 140.63355 | 26,452 | 0.866924 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Homework part I: Prohibited Comment Classification (3 points)\n\n\n\n__In this notebook__ you will build an algorithm that classifies social media comments into normal or toxic.\nLike in many real-world cases, you only have a small (10^3) dataset of hand-labeled examples to work with. We'll tackle this problem using both classical nlp methods and embedding-based approach.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndata = pd.read_csv(\"comments.tsv\", sep='\\t')\n\ntexts = data['comment_text'].values\ntarget = data['should_ban'].values\ndata[50::200]",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\ntexts_train, texts_test, y_train, y_test = train_test_split(texts, target, test_size=0.5, random_state=42)",
"_____no_output_____"
]
],
[
[
"__Note:__ it is generally a good idea to split data into train/test before anything is done to them.\n\nIt guards you against possible data leakage in the preprocessing stage. For example, should you decide to select words present in obscene tweets as features, you should only count those words over the training set. Otherwise your algoritm can cheat evaluation.",
"_____no_output_____"
],
[
"### Preprocessing and tokenization\n\nComments contain raw text with punctuation, upper/lowercase letters and even newline symbols.\n\nTo simplify all further steps, we'll split text into space-separated tokens using one of nltk tokenizers.",
"_____no_output_____"
]
],
[
[
"from nltk.tokenize import TweetTokenizer\ntokenizer = TweetTokenizer()\npreprocess = lambda text: ' '.join(tokenizer.tokenize(text.lower()))\n\ntext = 'How to be a grown-up at work: replace \"fuck you\" with \"Ok, great!\".'\nprint(\"before:\", text,)\nprint(\"after:\", preprocess(text),)",
"before: How to be a grown-up at work: replace \"fuck you\" with \"Ok, great!\".\nafter: how to be a grown-up at work : replace \" fuck you \" with \" ok , great ! \" .\n"
],
[
"# task: preprocess each comment in train and test\n\n\ntexts_train = np.array(list(map(preprocess, texts_train)))\ntexts_test = np.array(list(map(preprocess, texts_test)))",
"_____no_output_____"
],
[
"assert texts_train[5] == 'who cares anymore . they attack with impunity .'\nassert texts_test[89] == 'hey todds ! quick q ? why are you so gay'\nassert len(texts_test) == len(y_test)",
"_____no_output_____"
]
],
[
[
"### Solving it: bag of words\n\n\n\nOne traditional approach to such problem is to use bag of words features:\n1. build a vocabulary of frequent words (use train data only)\n2. for each training sample, count the number of times a word occurs in it (for each word in vocabulary).\n3. consider this count a feature for some classifier\n\n__Note:__ in practice, you can compute such features using sklearn. Please don't do that in the current assignment, though.\n* `from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer`",
"_____no_output_____"
]
],
[
[
"# task: find up to k most frequent tokens in texts_train,\n# sort them by number of occurences (highest first)\nk = 10000\n\nfrom collections import Counter\n\nbow_vocabulary = list(zip(*Counter((' '.join(texts_train)).split()).most_common(k)))[0]\n\nprint('example features:', sorted(bow_vocabulary)[::100])",
"example features: ['!', 'came', 'faggot', 'lets', 'punctuation', 'theoretical']\n"
],
[
"def text_to_bow(text):\n \"\"\" convert text string to an array of token counts. Use bow_vocabulary. \"\"\"\n \n counter = Counter(text.split())\n return np.array(list(counter[word] for word in bow_vocabulary), 'float32')",
"_____no_output_____"
],
[
"X_train_bow = np.stack(list(map(text_to_bow, texts_train)))\nX_test_bow = np.stack(list(map(text_to_bow, texts_test)))",
"_____no_output_____"
],
[
"k_max = len(set(' '.join(texts_train).split()))\nassert X_train_bow.shape == (len(texts_train), min(k, k_max))\nassert X_test_bow.shape == (len(texts_test), min(k, k_max))\nassert np.all(X_train_bow[5:10].sum(-1) == np.array([len(s.split()) for s in texts_train[5:10]]))\nassert len(bow_vocabulary) <= min(k, k_max)\nassert X_train_bow[6, bow_vocabulary.index('.')] == texts_train[6].split().count('.')",
"_____no_output_____"
]
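,
[
"# Hedged debugging cross-check only (the task forbids sklearn vectorizers\n# in the solution itself, but allows them for debugging): with the same\n# vocabulary and a whitespace tokenizer the totals should agree with ours.\nfrom sklearn.feature_extraction.text import CountVectorizer\ncv = CountVectorizer(vocabulary=bow_vocabulary, tokenizer=str.split)\nprint(cv.fit_transform(texts_train).sum(), X_train_bow.sum())",
"_____no_output_____"
]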
],
[
[
"Machine learning stuff: fit, predict, evaluate. You know the drill.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\nbow_model = LogisticRegression().fit(X_train_bow, y_train)",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_auc_score, roc_curve\n\nfor name, X, y, model in [\n ('train', X_train_bow, y_train, bow_model),\n ('test ', X_test_bow, y_test, bow_model)\n]:\n proba = model.predict_proba(X)[:, 1]\n auc = roc_auc_score(y, proba)\n plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))\n\nplt.plot([0, 1], [0, 1], '--', color='black',)\nplt.legend(fontsize='large')\nplt.grid()",
"_____no_output_____"
]
],
[
[
"### Task: implement TF-IDF features\n\nNot all words are equally useful. One can prioritize rare words and downscale words like \"and\"/\"or\" by using __tf-idf features__. This abbreviation stands for __text frequency/inverse document frequence__ and means exactly that:\n\n$$ feature_i = { Count(word_i \\in x) \\times { log {N \\over Count(word_i \\in D) + \\alpha} }} $$\n\n\n, where x is a single text D is your dataset (a collection of texts), N is total number of words and $\\alpha$ is a smoothing hyperparameter (typically 1).\n\nIt may also be a good idea to normalize each data sample after computing tf-idf features.\n\n__Your task:__ implement tf-idf features, train a model and evaluate ROC curve. Compare it with basic BagOfWords model from above.\n\nPlease don't use sklearn/nltk builtin tf-idf vectorizers in your solution :) You can still use 'em for debugging though.",
"_____no_output_____"
]
],
[
[
"def tf_idf(texts_collection, a = 1):\n global_counts = text_to_bow(' '.join(texts_collection))\n return np.stack(list(map(text_to_bow, texts_train))) * np.log(len(texts_collection)/(global_counts + a))",
"_____no_output_____"
],
[
"idf = np.log(len(texts_train)/(global_counts + 1))\nX_train_tfidf = X_train_bow * idf\nX_test_tfidf = X_test_bow * idf",
"_____no_output_____"
],
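[
"# Hedged optional step the text above suggests: L2-normalize each tf-idf\n# row so samples are comparable in scale (guarding against all-zero rows).\nnorms = np.linalg.norm(X_train_tfidf, axis=1, keepdims=True)\nX_train_tfidf_normed = X_train_tfidf / np.maximum(norms, 1e-9)\nX_test_tfidf_normed = X_test_tfidf / np.maximum(np.linalg.norm(X_test_tfidf, axis=1, keepdims=True), 1e-9)",
"_____no_output_____"
],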
[
"tfidf_model = LogisticRegression().fit(X_train_tfidf, y_train)",
"_____no_output_____"
],
[
"for name, X, y, model in [\n ('train', X_train_tfidf, y_train, tfidf_model),\n ('test ', X_test_tfidf, y_test, tfidf_model)\n]:\n proba = model.predict_proba(X)[:, 1]\n auc = roc_auc_score(y, proba)\n plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))\n\nplt.plot([0, 1], [0, 1], '--', color='black',)\nplt.legend(fontsize='large')\nplt.grid()",
"_____no_output_____"
]
],
[
[
"```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n```\n\n\n### Solving it better: word vectors\n\nLet's try another approach: instead of counting per-word frequencies, we shall map all words to pre-trained word vectors and average over them to get text features.\n\nThis should give us two key advantages: (1) we now have 10^2 features instead of 10^4 and (2) our model can generalize to word that are not in training dataset.\n\nWe begin with a standard approach with pre-trained word vectors. However, you may also try\n* training embeddings from scratch on relevant (unlabeled) data\n* multiplying word vectors by inverse word frequency in dataset (like tf-idf).\n* concatenating several embeddings\n * call `gensim.downloader.info()['models'].keys()` to get a list of available models\n* clusterizing words by their word-vectors and try bag of cluster_ids\n\n__Note:__ loading pre-trained model may take a while. It's a perfect opportunity to refill your cup of tea/coffee and grab some extra cookies. Or binge-watch some tv series if you're slow on internet connection",
"_____no_output_____"
]
],
[
[
"import gensim.downloader \nembeddings = gensim.downloader.load(\"fasttext-wiki-news-subwords-300\")\n\n# If you're low on RAM or download speed, use \"glove-wiki-gigaword-100\" instead. Ignore all further asserts.",
"[==================================================] 100.0% 958.5/958.4MB downloaded\n"
],
[
"type(embeddings)",
"_____no_output_____"
],
[
"embeddings.get_vector('qweqw', )",
"_____no_output_____"
],
[
"def vectorize_sum(comment):\n \"\"\"\n implement a function that converts preprocessed comment to a sum of token vectors\n \"\"\"\n embedding_dim = embeddings.vectors.shape[1]\n features = np.zeros([embedding_dim], dtype='float32')\n \n features += np.sum(embeddings[word] for word in comment.split() if word in embeddings.vocab)\n \n return features\n\nassert np.allclose(\n vectorize_sum(\"who cares anymore . they attack with impunity .\")[::70],\n np.array([ 0.0108616 , 0.0261663 , 0.13855131, -0.18510573, -0.46380025])\n)",
"_____no_output_____"
],
[
"X_train_wv = np.stack([vectorize_sum(text) for text in texts_train])\nX_test_wv = np.stack([vectorize_sum(text) for text in texts_test])",
"_____no_output_____"
],
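[
"# Hedged variant from the suggestion list above: weight each word vector\n# by its idf before summing (tokens outside bow_vocabulary get weight 1).\nidf_lookup = dict(zip(bow_vocabulary, idf))\n\ndef vectorize_sum_idf(comment):\n    features = np.zeros(embeddings.vectors.shape[1], dtype='float32')\n    for word in comment.split():\n        if word in embeddings.vocab:\n            features += idf_lookup.get(word, 1.0) * embeddings[word]\n    return features\n\nX_train_wv_idf = np.stack([vectorize_sum_idf(text) for text in texts_train])",
"_____no_output_____"
],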
[
"wv_model = LogisticRegression().fit(X_train_wv, y_train)\n\nfor name, X, y, model in [\n ('bow train', X_train_bow, y_train, bow_model),\n ('bow test ', X_test_bow, y_test, bow_model),\n ('vec train', X_train_wv, y_train, wv_model),\n ('vec test ', X_test_wv, y_test, wv_model)\n]:\n proba = model.predict_proba(X)[:, 1]\n auc = roc_auc_score(y, proba)\n plt.plot(*roc_curve(y, proba)[:2], label='%s AUC=%.4f' % (name, auc))\n\nplt.plot([0, 1], [0, 1], '--', color='black',)\nplt.legend(fontsize='large')\nplt.grid()\n\nassert roc_auc_score(y_test, wv_model.predict_proba(X_test_wv)[:, 1]) > 0.92, \"something's wrong with your features\"",
"_____no_output_____"
]
],
[
[
"If everything went right, you've just managed to reduce misclassification rate by a factor of two.\nThis trick is very useful when you're dealing with small datasets. However, if you have hundreds of thousands of samples, there's a whole different range of methods for that. We'll get there in the second part.",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0da985e4a394b971e793f37359ad98c64e84cc1 | 4,362 | ipynb | Jupyter Notebook | Day6 Assignment.ipynb | divi2398/Lakshmi.M | 8f95c2db8994a6b97a33e2c0137388964d20c991 | [
"Apache-2.0"
] | null | null | null | Day6 Assignment.ipynb | divi2398/Lakshmi.M | 8f95c2db8994a6b97a33e2c0137388964d20c991 | [
"Apache-2.0"
] | null | null | null | Day6 Assignment.ipynb | divi2398/Lakshmi.M | 8f95c2db8994a6b97a33e2c0137388964d20c991 | [
"Apache-2.0"
] | null | null | null | 21.701493 | 108 | 0.481889 | [
[
[
"# Assignments of Day 6 ",
"_____no_output_____"
]
],
[
[
"# Create a bank account class that has two attributes\n# arr = []\n# arr.append([ownername,balance])\n# for i in arr:\n# if self.name == i[0]:\n\nclass bank:\n def __init__(self,ownername,balance):\n self.ownername = ownername\n self.balance = balance\n\n def deposit(self,amount):\n self.balance +=amount\n print(\"your updated balance is :\", self.balance)\n \n def withdraw(self,amount):\n if(self.balance>amount):\n self.balance-=amount\n print(\"your updated balance is :\", self.balance)\n else:\n print(\"you don't have enough cradit in your account, see you have only\",self.balance)\n \n ",
"_____no_output_____"
],
[
"anmol = bank(\"Anmolnoor\",45000)",
"_____no_output_____"
],
[
"anmol.deposit(5000)",
"your updated balance is : 50000\n"
],
[
"anmol.withdraw(45000)",
"your updated balance is : 5000\n"
],
[
"anmol.withdraw(15000)",
"you don't have enough cradit in your account, see you have only 5000\n"
]
],
[
[
"# For this challenge,create a cone class that has two attributes:\n*R=Radius\n*h=Height\nAnd two methods:\n*Volume = Π * r2 = (h/3)\n*Surface area : base : Π * r2 , side : Π * r * √(r2 + h2)\nMake only one class with functions,as in where required import Math.",
"_____no_output_____"
]
],
[
[
"import math\nclass cone:\n def __init__(self,radius,height):\n self.radius=radius\n self.height=height\n \n def volume(self):\n vol = math.pi * (self.radius**2) * (self.height/3)\n print(\"Volume of this cone is : \",vol)\n \n def surfaceArea(self):\n area = math.pi* self.radius *(self.radius+(math.sqrt((self.radius**2)+(self.height**2))))\n print(\"Surface area of this cone is \",area)",
"_____no_output_____"
],
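[
"# Hedged sanity check (not part of the assignment): evaluate the\n# closed-form formulas directly for r=5, h=6 before trusting the class.\nr, h = 5, 6\nprint(math.pi * r**2 * h / 3)                       # expected volume\nprint(math.pi * r * (r + math.sqrt(r**2 + h**2)))   # expected surface area",
"_____no_output_____"
],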
[
"con = cone(5,6)",
"_____no_output_____"
],
[
"con.volume()",
"Volume of this cone is : 157.07963267948966\n"
],
[
"con.surfaceArea()",
"Surface area of this cone is 201.22293136239685\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0daad661e61238bd30e07acc3231da29b7045cd | 6,455 | ipynb | Jupyter Notebook | Machine_Learning_Model/Action_template/.ipynb_checkpoints/Iris Deltas-checkpoint.ipynb | kaitlin31415/BCI4KidsMediapipe | 913ad540716bec476148a3f31001b279c86d9297 | [
"Apache-2.0"
] | 5 | 2021-10-04T20:55:37.000Z | 2022-01-31T22:12:31.000Z | Machine_Learning_Model/Action_template/.ipynb_checkpoints/Iris Deltas-checkpoint.ipynb | kaitlin31415/BCI4KidsMediapipe | 913ad540716bec476148a3f31001b279c86d9297 | [
"Apache-2.0"
] | 158 | 2021-09-29T23:43:08.000Z | 2022-03-31T21:05:46.000Z | Machine_Learning_Model/Action_template/.ipynb_checkpoints/Iris Deltas-checkpoint.ipynb | kaitlin31415/BCI4KidsMediapipe | 913ad540716bec476148a3f31001b279c86d9297 | [
"Apache-2.0"
] | 3 | 2021-09-27T23:00:36.000Z | 2022-01-31T22:12:33.000Z | 29.746544 | 763 | 0.525174 | [
[
[
"import numpy as np\nimport pandas as pd\nimport os",
"_____no_output_____"
],
[
"# Path for exported data, numpy arrays\nDATA_PATH = os.path.join('eyes_30_70') \n\n# Actions that we try to detect\nactions = np.array(['yes', 'no' ])\n\n# Thirty videos worth of data\nno_sequences = 30\n\n# Videos are going to be 30 frames in length\nsequence_length = 70\n\n\nres = np.load(os.path.join(DATA_PATH, actions[0], \"0\", \"{}.npy\".format(2)))",
"_____no_output_____"
],
[
"len(res)\n",
"_____no_output_____"
],
[
"# Path for exported data, numpy arrays\nDATA_PATH = os.path.join('TrainingScript/eyes_30_70_lm') \n\n# Actions that we try to detect\nactions = np.array(['yes', 'no' ])\n\n# Thirty videos worth of data\nno_sequences = 30\n\n# Videos are going to be 30 frames in length\nsequence_length = 70\n\n\nres = np.load(os.path.join(DATA_PATH, actions[0], \"0\", \"{}.npy\".format(2)))",
"_____no_output_____"
],
[
"len(res)",
"_____no_output_____"
],
[
"## First 5 landmarks (15 points are the RIGHT iris landmarks) [0:15]\n## Next 5 Lankmarks (15 points are the LEFT iris landmarks) [15:30]\n## Next 17 Landmarks (51 points are RIGHT EYE) [30:81]\n## Next 17 Lankmarks (51 points are LEFT EYE) [81:132]\n## Next 2 Landmarks (6 points are LEFT EYE ANCHORS) [132:138]\n## Next 2 Landmarks (6 points are RIGHT EYE ANCHORS) [138:144]\n",
"_____no_output_____"
],
[
"def compareIrisLandmarks(irisLandmarks, eyeLandmarks, eyeAnchors):\n deltaVals = []\n for i in range(0, len(irisLandmarks), 3):\n x = irisLandmarks[i]\n y = irisLandmarks[i+1]\n z = irisLandmarks[i+2]\n \n #compare to \n for j in range(0, len(eyeLandmarks), 3):\n x_c = eyeLandmarks[j]\n y_c = eyeLandmarks[j+1]\n z_c = eyeLandmarks[j+2]\n \n deltaVals.append(x - x_c)\n deltaVals.append(y - y_c)\n deltaVals.append(z - z_c)\n \n for j in range(0, len(eyeAnchors), 3):\n x_c = eyeLandmarks[j]\n y_c = eyeLandmarks[j+1]\n z_c = eyeLandmarks[j+2]\n \n deltaVals.append(x - x_c)\n deltaVals.append(y - y_c)\n deltaVals.append(z - z_c)\n return deltaVals\n \n ",
"_____no_output_____"
],
[
"rightEyeDeltas = compareIrisLandmarks(res[0:15], res[30:81], res[138:])",
"_____no_output_____"
]
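,
[
"# Hedged counterpart for the left eye, following the slice layout documented\n# above: left iris [15:30], left eye landmarks [81:132], left anchors [132:138].\nleftEyeDeltas = compareIrisLandmarks(res[15:30], res[81:132], res[132:138])\nlen(leftEyeDeltas)",
"_____no_output_____"
]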
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0dace9ef3c0c22b725504721ce464d80559629e | 24,434 | ipynb | Jupyter Notebook | notebooks/analyze-customer-data.ipynb | aimazin/customer-data-with-spark | b36d0fac232e7c48966c8a589a70009737851614 | [
"Apache-2.0"
] | 15 | 2018-08-24T03:57:33.000Z | 2021-06-13T04:33:46.000Z | notebooks/analyze-customer-data.ipynb | aimazin/customer-data-with-spark | b36d0fac232e7c48966c8a589a70009737851614 | [
"Apache-2.0"
] | 2 | 2019-07-11T14:54:52.000Z | 2019-07-16T18:34:41.000Z | notebooks/analyze-customer-data.ipynb | aimazin/customer-data-with-spark | b36d0fac232e7c48966c8a589a70009737851614 | [
"Apache-2.0"
] | 22 | 2018-07-25T17:40:56.000Z | 2021-08-04T05:10:54.000Z | 32.929919 | 387 | 0.575755 | [
[
[
"# Data analysis with Python, Apache Spark, and PixieDust\n***\n\nIn this notebook you will:\n\n* analyze customer demographics, such as, age, gender, income, and location\n* combine that data with sales data to examine trends for product categories, transaction types, and product popularity\n* load data from GitHub as well as from a public open data set\n* cleanse, shape, and enrich the data, and then visualize the data with the PixieDust library\n\nDon't worry! PixieDust graphs don't require coding. \n\nBy the end of the notebook, you will understand how to combine data to gain insights about which customers you might target to increase sales.\n\nThis notebook runs on Python 2 with Spark 2.1, and PixieDust 1.1.10.",
"_____no_output_____"
],
[
"<a id=\"toc\"></a>\n## Table of contents\n\n#### [Setup](#Setup)\n[Load data into the notebook](#Load-data-into-the-notebook)\n#### [Explore customer demographics](#part1)\n[Prepare the customer data set](#Prepare-the-customer-data-set)<br>\n[Visualize customer demographics and locations](#Visualize-customer-demographics-and-locations)<br>\n[Enrich demographic information with open data](#Enrich-demographic-information-with-open-data)<br> \n\n#### [Summary and next steps](#summary)",
"_____no_output_____"
],
[
"## Setup\nYou need to import libraries and load the customer data into this notebook.",
"_____no_output_____"
],
[
"Import the necessary libraries:",
"_____no_output_____"
]
],
[
[
"import pixiedust\nimport pyspark.sql.functions as func\nimport pyspark.sql.types as types\nimport re\nimport json\nimport os\nimport requests ",
"_____no_output_____"
]
],
[
[
"**If you get any errors or if a package is out of date:**\n\n* uncomment the lines in the next cell (remove the `#`)\n* restart the kernel (from the Kernel menu at the top of the notebook)\n* reload the browser page\n* run the cell above, and continue with the notebook",
"_____no_output_____"
]
],
[
[
"#!pip install jinja2 --user --upgrade\n#!pip install pixiedust --user --upgrade\n#!pip install -U --no-deps bokeh",
"_____no_output_____"
]
],
[
[
"### Load data into the notebook\n\nThe data file contains both the customer demographic data that you'll analyzed in Part 1, and the sales transaction data for Part 2.\n\nWith `pixiedust.sampleData()` you can load csv data from any url. The below loads the data in a Spark DataFrame. \n\n> In case you wondered, this works with Pandas as well, just add `forcePandas = True` to load data in a Pandas DataFrame. *But do not add this to the below cell as in this notebook you will use Spark.*",
"_____no_output_____"
]
],
[
[
"raw_df = pixiedust.sampleData('https://raw.githubusercontent.com/IBM/analyze-customer-data-spark-pixiedust/master/data/customers_orders1_opt.csv')",
"_____no_output_____"
],
[
"raw_df",
"_____no_output_____"
]
],
[
[
"[Back to Table of Contents](#toc)\n<a id=\"part1\"></a>\n# Explore customer demographics \nIn this part of the notebook, you will prepare the customer data and then start learning about your customers by creating multiple charts and maps. ",
"_____no_output_____"
],
[
"## Prepare the customer data set\nCreate a new Spark DataFrame with only the data you need and then cleanse and enrich the data.\n\nExtract the columns that you are interested in, remove duplicate customers, and add a column for aggregations:",
"_____no_output_____"
]
],
[
[
"# Extract the customer information from the data set\ncustomer_df = raw_df.select(\"CUST_ID\", \n \"CUSTNAME\", \n \"ADDRESS1\", \n \"ADDRESS2\", \n \"CITY\", \n \"POSTAL_CODE\", \n \"POSTAL_CODE_PLUS4\", \n \"STATE\", \n \"COUNTRY_CODE\", \n \"EMAIL_ADDRESS\", \n \"PHONE_NUMBER\",\n \"AGE\",\n \"GenderCode\",\n \"GENERATION\",\n \"NATIONALITY\", \n \"NATIONAL_ID\", \n \"DRIVER_LICENSE\").dropDuplicates()\n\ncustomer_df.printSchema()",
"_____no_output_____"
]
],
[
[
"Notice that the data type of the AGE column is currently a string. Convert the AGE column to a numeric data type so you can run calculations on customer age.",
"_____no_output_____"
]
],
[
[
"# ---------------------------------------\n# Cleanse age (enforce numeric data type) \n# ---------------------------------------\n\ndef getNumericVal(col):\n \"\"\"\n input: pyspark.sql.types.Column\n output: the numeric value represented by col or None\n \"\"\"\n try:\n return int(col)\n except ValueError:\n # age-33\n match = re.match('^age\\-(\\d+)$', col)\n if match:\n try:\n return int(match.group(1))\n except ValueError: \n return None\n return None \n\ntoNumericValUDF = func.udf(lambda c: getNumericVal(c), types.IntegerType())\ncustomer_df = customer_df.withColumn(\"AGE\", toNumericValUDF(customer_df[\"AGE\"]))\ncustomer_df",
"_____no_output_____"
],
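[
"# Hedged spot-check for the age parser above: plain digits and the\n# 'age-NN' pattern should both parse; anything else should return None.\nprint(getNumericVal('33'))\nprint(getNumericVal('age-33'))\nprint(getNumericVal('unknown'))",
"_____no_output_____"
],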
[
"customer_df.show(5)",
"_____no_output_____"
]
],
[
[
"The GenderCode column contains salutations instead of gender values. Derive the gender information for each customer based on the salutation and rename the GenderCode column to GENDER.",
"_____no_output_____"
]
],
[
[
"# ------------------------------\n# Derive gender from salutation\n# ------------------------------\ndef deriveGender(col):\n \"\"\" input: pyspark.sql.types.Column\n output: \"male\", \"female\" or \"unknown\"\n \"\"\" \n if col in ['Mr.', 'Master.']:\n return 'male'\n elif col in ['Mrs.', 'Miss.']:\n return 'female'\n else:\n return 'unknown';\n \nderiveGenderUDF = func.udf(lambda c: deriveGender(c), types.StringType())\ncustomer_df = customer_df.withColumn(\"GENDER\", deriveGenderUDF(customer_df[\"GenderCode\"]))\ncustomer_df.cache()",
"_____no_output_____"
]
],
[
[
"## Explore the customer data set\n\nInstead of exploring the data with `.printSchema()` and `.show()` you can quickly explore data sets using PixieDust'. Invoke the `display()` command and click the table icon to review the schema and preview the data. Customize the options to display only a subset of the fields or rows or apply a filter (by clicking the funnel icon).",
"_____no_output_____"
]
],
[
[
"display(customer_df)",
"_____no_output_____"
]
],
[
[
"[Back to Table of Contents](#toc)\n## Visualize customer demographics and locations\n\nNow you are ready to explore the customer base. Using simple charts, you can quickly see these characteristics:\n * Customer demographics (gender and age)\n * Customer locations (city, state, and country)\n\nYou will create charts with the PixieDust library:\n\n - [View customers by gender in a pie chart](#View-customers-by-gender-in-a-pie-chart)\n - [View customers by generation in a bar chart](#View-customers-by-generation-in-a-bar-chart)\n - [View customers by age in a histogram chart](#View-customers-by-age-in-a-histogram-chart)\n - [View specific information with a filter function](#View-specific-information-with-a-filter-function)\n - [View customer density by location with a map](#View-customer-density-by-location-with-a-map)",
"_____no_output_____"
],
[
"### View customers by gender in a pie chart\n\nRun the `display()` command and then configure the graph to show the percentages of male and female customers:\n\n1. Run the next cell. The PixieDust interactive widget appears. \n1. Click the chart button and choose **Pie Chart**. The chart options tool appears.\n1. In the chart options, drag `GENDER` into the **Keys** box. \n1. In the **Aggregation** field, choose **COUNT**. \n1. Increase the **# of Rows to Display** to a very large number to display all data.\n1. Click **OK**. The pie chart appears.\n\nIf you want to make further changes, click **Options** to return to the chart options tool.",
"_____no_output_____"
]
],
[
[
"display(customer_df)",
"_____no_output_____"
]
],
[
[
"[Back to Table of Contents](#toc)\n### View customers by generation in a bar chart\nLook at how many customers you have per \"generation.\"\n\nRun the next cell and configure the graph: \n1. Choose **Bar Chart** as the chart type and configure the chart options as instructed below.\n2. Put `GENERATION` into the **Keys** box.\n3. Set **aggregation** to `COUNT`.\n1. Increase the **# of Rows to Display** to a very large number to display all data.\n4. Click **OK**\n4. Change the **Renderer** at the top right of the chart to explore different visualisations. \n4. You can use clustering to group customers, for example by geographic location. To group generations by country, select `COUNTRY_CODE` from the **Cluster by** list from the menu on the left of the chart. ",
"_____no_output_____"
]
],
[
[
"display(customer_df)",
"_____no_output_____"
]
],
[
[
"[Back to Table of Contents](#toc)\n### View customers by age in a histogram chart\nA generation is a broad age range. You can look at a smaller age range with a histogram chart. A histogram is like a bar chart except each bar represents a range of numbers, called a bin. You can customize the size of the age range by adjusting the bin size. The more bins you specify, the smaller the age range.\n\nRun the next cell and configure the graph:\n1. Choose **Histogram** as the chart type. \n2. Put `AGE` into the **Values** box.\n1. Increase the **# of Rows to Display** to a very large number to display all data.\n1. Click **OK**.\n3. Use the **Bin count** slider to specify the number of the bins. Try starting with 40.",
"_____no_output_____"
]
],
[
[
"display(customer_df)",
"_____no_output_____"
]
],
[
[
"[Back to Table of Contents](#toc)\n### View specific information with a filter function\n\nYou can filter records to restrict analysis by using the [PySpark DataFrame](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame) `filter()` function.\n\nIf you want to view the age distribution for a specific generation, uncomment the desired filter condition and run the next cell:",
"_____no_output_____"
]
],
[
[
"# Data subsetting: display age distribution for a specific generation\n# (Chart type: histogram, Chart Options > Values: AGE)\n# to change the filter condition remove the # sign \ncondition = \"GENERATION = 'Baby_Boomers'\"\n#condition = \"GENERATION = 'Gen_X'\"\n#condition = \"GENERATION = 'Gen_Y'\"\n#condition = \"GENERATION = 'Gen_Z'\"\nboomers_df = customer_df.filter(condition)\ndisplay(boomers_df)",
"_____no_output_____"
]
],
[
[
"PixieDust supports basic filtering to make it easy to analyse data subsets. For example, to view the age distribution for a specific gender configure the chart as follows:\n\n 1. Choose `Histogram` as the chart type.\n 2. Put `AGE` into the **Values** box and click OK.\n 3. Click the filter button (looking like a funnel), and choose **GENDER** as field and `female` as value.\n \nThe filter is only applied to the working data set and does not modify the input `customer_df`.\n",
"_____no_output_____"
]
],
[
[
"display(customer_df)",
"_____no_output_____"
]
],
[
[
"You can also filter by location. For example, the following command creates a new DataFrame that filters for customers from the USA:",
"_____no_output_____"
]
],
[
[
"condition = \"COUNTRY_CODE = 'US'\"\nus_customer_df = customer_df.filter(condition)",
"_____no_output_____"
]
],
[
[
"You can pivot your analysis perspective based on aspects that are of interest to you by choosing different keys and clusters.\n\nCreate a bar chart and cluster the data.\n\nRun the next cell and configure the graph:\n1. Choose **Bar chart** as the chart type.\n2. Put `COUNTRY_CODE` into the **Keys** box.\n4. Set Aggregation to **COUNT**.\n5. Click **OK**. The chart displays the number of US customers.\n6. From the **Cluster By** list, choose **GENDER**. The chart shows the number of customers by gender.",
"_____no_output_____"
]
],
[
[
"display(us_customer_df)",
"_____no_output_____"
]
],
[
[
"Now try to cluster the customers by state.\n\nA bar chart isn't the best way to show geographic location!",
"_____no_output_____"
],
[
"[Back to Table of Contents](#toc)\n### View customer density by location with a map\nMaps are a much better way to view location data than other chart types. \n\nVisualize customer density by US state with a map.\n\nRun the next cell and configure the graph:\n1. Choose **Map** as the chart type.\n2. Put `STATE` into the **Keys** box.\n4. Set Aggregation to **COUNT**.\n5. Click **OK**. The map displays the number of US customers.\n6. From the **Renderer** list, choose **brunel**.\n\n > PixieDust supports three map renderers: brunel, [mapbox](https://www.mapbox.com/) and Google. Note that the Mapbox renderer and the Google renderer require an API key or access token and supported features vary by renderer.\n\n7. You can explore more about customers in each state by changing the aggregation method, for example look at customer age ranges (avg, minimum, and maximum) by state. Simply Change the aggregation function to `AVG`, `MIN`, or `MAX` and choose `AGE` as value. \n",
"_____no_output_____"
]
],
[
[
"display(us_customer_df)",
"_____no_output_____"
]
],
[
[
"[Back to Table of Contents](#toc)\n## Enrich demographic information with open data\nYou can easily combine other sources of data with your existing data. There is a lot of publicly available open data sets that can be very helpful. For example, knowing the approximate income level of your customers might help you target your marketing campaigns.\n\nRun the next cell to load [this data set](https://apsportal.ibm.com/exchange/public/entry/view/beb8c30a3f559e58716d983671b70337) from the United States Census Bureau into your notebook. The data set contains US household income statistics compiled at the zip code geography level.",
"_____no_output_____"
]
],
[
[
"# Load median income information for all US ZIP codes from a public source\nincome_df = pixiedust.sampleData('https://raw.githubusercontent.com/IBM/analyze-customer-data-spark-pixiedust/master/data/x19_income_select.csv')",
"_____no_output_____"
],
[
"income_df.printSchema()",
"_____no_output_____"
]
],
[
[
"Now cleanse the income data set to remove the data that you don't need. Create a new DataFrame for this data:\n - The zip code, extracted from the GEOID column.\n - The column B19049e1, which contains the median household income for 2013.",
"_____no_output_____"
]
],
[
[
"# ------------------------------\n# Helper: Extract ZIP code\n# ------------------------------\ndef extractZIPCode(col):\n \"\"\" input: pyspark.sql.types.Column containing a geo code, like '86000US01001'\n output: ZIP code\n \"\"\"\n m = re.match('^\\d+US(\\d\\d\\d\\d\\d)$',col)\n if m:\n return m.group(1)\n else:\n return None \n \ngetZIPCodeUDF = func.udf(lambda c: extractZIPCode(c), types.StringType())\nincome_df = income_df.select('GEOID', 'B19049e1').withColumnRenamed('B19049e1', 'MEDIAN_INCOME_IN_ZIP').withColumn(\"ZIP\", getZIPCodeUDF(income_df['GEOID']))\nincome_df",
"_____no_output_____"
]
],
[
[
"Perform a left outer join on the customer data set with the income data set, using the zip code as the join condition. For the complete syntax of joins, go to the <a href=\"https://spark.apache.org/docs/1.5.2/api/python/pyspark.sql.html#pyspark.sql.DataFrame\" target=\"_blank\" rel=\"noopener noreferrer\">pyspark DataFrame documentation</a> and scroll down to the `join` syntax. ",
"_____no_output_____"
]
],
[
[
"us_customer_df = us_customer_df.join(income_df, us_customer_df.POSTAL_CODE == income_df.ZIP, 'left_outer').drop('GEOID').drop('ZIP')",
"_____no_output_____"
],
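[
"# Hedged follow-up: a left outer join can leave customers without an\n# income match; count the rows where MEDIAN_INCOME_IN_ZIP stayed null.\nus_customer_df.filter(us_customer_df['MEDIAN_INCOME_IN_ZIP'].isNull()).count()",
"_____no_output_____"
],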
[
"display(us_customer_df)",
"_____no_output_____"
]
],
[
[
"Now you can visualize the income distribution of your customers by zip code.\n Visualize income distribution for our customers.\nRun the next cell and configure the graph:\n1. Choose **Histogram** as the chart type.\n2. Put `MEDIAN_INCOME_IN_ZIP` into the **Values** box and click **OK**.",
"_____no_output_____"
],
[
"The majority of your customers live in zip codes where the median income is around 40,000 USD. ",
"_____no_output_____"
],
[
"[Back to Table of Contents](#toc)\n",
"_____no_output_____"
],
[
"Copyright © 2017, 2018 IBM. This notebook and its source code are released under the terms of the MIT License.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0dad65a1368cf217654908bff868ba525c8c6c6 | 21,348 | ipynb | Jupyter Notebook | week06_policy_based/reinforce_lasagne.ipynb | kianya/Practical_RL | 6e4ee47a0b22622b12e9c3f4a3ffd18ae0c6337c | [
"Unlicense"
] | 3 | 2020-12-14T11:03:38.000Z | 2021-03-03T21:38:40.000Z | week06_policy_based/reinforce_lasagne.ipynb | kianya/Practical_RL | 6e4ee47a0b22622b12e9c3f4a3ffd18ae0c6337c | [
"Unlicense"
] | null | null | null | week06_policy_based/reinforce_lasagne.ipynb | kianya/Practical_RL | 6e4ee47a0b22622b12e9c3f4a3ffd18ae0c6337c | [
"Unlicense"
] | 1 | 2021-12-16T14:42:21.000Z | 2021-12-16T14:42:21.000Z | 38.956204 | 7,232 | 0.664793 | [
[
[
"# REINFORCE in lasagne\n\nJust like we did before for q-learning, this time we'll design a lasagne network to learn `CartPole-v0` via policy gradient (REINFORCE).\n\nMost of the code in this notebook is taken from approximate qlearning, so you'll find it more or less familiar and even simpler.",
"_____no_output_____"
],
[
"__Frameworks__ - we'll accept this homework in any deep learning framework. For example, it translates to TensorFlow almost line-to-line. However, we recommend you to stick to theano/lasagne unless you're certain about your skills in the framework of your choice.",
"_____no_output_____"
]
],
[
[
"%env THEANO_FLAGS = 'floatX=float32'\nimport os\nif type(os.environ.get(\"DISPLAY\")) is not str or len(os.environ.get(\"DISPLAY\")) == 0:\n !bash ../xvfb start\n os.environ['DISPLAY'] = ':1'",
"env: THEANO_FLAGS='floatX=float32'\n"
],
[
"import gym\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nenv = gym.make(\"CartPole-v0\").env\nenv.reset()\nn_actions = env.action_space.n\nstate_dim = env.observation_space.shape\n\nplt.imshow(env.render(\"rgb_array\"))",
"[2017-03-14 19:35:59,320] Making new env: CartPole-v0\n"
]
],
[
[
"# Building the network for REINFORCE",
"_____no_output_____"
],
[
"For REINFORCE algorithm, we'll need a model that predicts action probabilities given states.",
"_____no_output_____"
]
],
[
[
"import theano\nimport theano.tensor as T\n\n# create input variables. We'll support multiple states at once\n\nstates = T.matrix(\"states[batch,units]\")\nactions = T.ivector(\"action_ids[batch]\")\ncumulative_rewards = T.vector(\"G[batch] = r + gamma*r' + gamma^2*r'' + ...\")",
"_____no_output_____"
],
[
"import lasagne\nfrom lasagne.layers import *\n\n# input layer\nl_states = InputLayer((None,)+state_dim, input_var=states)\n\n\n<Your architecture. Please start with a 1-2 layers with 50-200 neurons >\n\n# output layer\n# this time we need to predict action probabilities,\n# so make sure your nonlinearity forces p>0 and sum_p = 1\nl_action_probas = DenseLayer( < ... > ,\n num_units= < ... > ,\n nonlinearity= < ... > )",
"_____no_output_____"
]
],
[
[
"#### Predict function",
"_____no_output_____"
]
],
[
[
"# get probabilities of actions\npredicted_probas = get_output(l_action_probas)\n\n# predict action probability given state\n# if you use float32, set allow_input_downcast=True\npredict_proba = <compile a function that takes states and returns predicted_probas >",
"_____no_output_____"
]
],
[
[
"#### Loss function and updates\n\nWe now need to define objective and update over policy gradient.\n\nOur objective function is\n\n$$ J \\approx { 1 \\over N } \\sum _{s_i,a_i} \\pi_\\theta (a_i | s_i) \\cdot G(s_i,a_i) $$\n\n\nFollowing the REINFORCE algorithm, we can define our objective as follows: \n\n$$ \\hat J \\approx { 1 \\over N } \\sum _{s_i,a_i} log \\pi_\\theta (a_i | s_i) \\cdot G(s_i,a_i) $$\n\nWhen you compute gradient of that function over network weights $ \\theta $, it will become exactly the policy gradient.\n",
"_____no_output_____"
]
],
[
[
"# select probabilities for chosen actions, pi(a_i|s_i)\npredicted_probas_for_actions = predicted_probas[T.arange(\n actions.shape[0]), actions]",
"_____no_output_____"
],
[
"# REINFORCE objective function\nJ = # <policy objective as in the last formula. Please use mean, not sum.>",
"_____no_output_____"
],
[
"# all network weights\nall_weights = <get all \"thetas\" aka network weights using lasagne >\n\n# weight updates. maximize J = minimize -J\nupdates = lasagne.updates.sgd(-J, all_weights, learning_rate=0.01)",
"_____no_output_____"
],
[
"train_step = theano.function([states, actions, cumulative_rewards], updates=updates,\n allow_input_downcast=True)",
"_____no_output_____"
]
],
[
[
"### Computing cumulative rewards",
"_____no_output_____"
]
],
[
[
"\n\ndef get_cumulative_rewards(rewards, # rewards at each step\n gamma=0.99 # discount for reward\n ):\n \"\"\"\n take a list of immediate rewards r(s,a) for the whole session \n compute cumulative returns (a.k.a. G(s,a) in Sutton '16)\n G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...\n\n The simple way to compute cumulative rewards is to iterate from last to first time tick\n and compute G_t = r_t + gamma*G_{t+1} recurrently\n\n You must return an array/list of cumulative rewards with as many elements as in the initial rewards.\n \"\"\"\n\n <your code here >\n\n return < array of cumulative rewards >",
"_____no_output_____"
],
[
"assert len(get_cumulative_rewards(range(100))) == 100\nassert np.allclose(get_cumulative_rewards([0, 0, 1, 0, 0, 1, 0], gamma=0.9), [\n 1.40049, 1.5561, 1.729, 0.81, 0.9, 1.0, 0.0])\nassert np.allclose(get_cumulative_rewards(\n [0, 0, 1, -2, 3, -4, 0], gamma=0.5), [0.0625, 0.125, 0.25, -1.5, 1.0, -4.0, 0.0])\nassert np.allclose(get_cumulative_rewards(\n [0, 0, 1, 2, 3, 4, 0], gamma=0), [0, 0, 1, 2, 3, 4, 0])\nprint(\"looks good!\")",
"_____no_output_____"
]
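,
[
"# Hedged optional tweak (not required by the asserts above): subtracting\n# the batch-mean return is a common constant baseline that lowers the\n# variance of the REINFORCE gradient without biasing it.\ndef normalize_returns(cumulative_rewards):\n    cumulative_rewards = np.asarray(cumulative_rewards, dtype='float32')\n    return cumulative_rewards - cumulative_rewards.mean()",
"_____no_output_____"
]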
],
[
[
"### Playing the game",
"_____no_output_____"
]
],
[
[
"def generate_session(t_max=1000):\n \"\"\"play env with REINFORCE agent and train at the session end\"\"\"\n\n # arrays to record session\n states, actions, rewards = [], [], []\n\n s = env.reset()\n\n for t in range(t_max):\n\n # action probabilities array aka pi(a|s)\n action_probas = predict_proba([s])[0]\n\n a = <sample action with given probabilities >\n\n new_s, r, done, info = env.step(a)\n\n # record session history to train later\n states.append(s)\n actions.append(a)\n rewards.append(r)\n\n s = new_s\n if done:\n break\n\n cumulative_rewards = get_cumulative_rewards(rewards)\n train_step(states, actions, cumulative_rewards)\n\n return sum(rewards)",
"_____no_output_____"
],
[
"for i in range(100):\n\n rewards = [generate_session() for _ in range(100)] # generate new sessions\n\n print(\"mean reward:%.3f\" % (np.mean(rewards)))\n\n if np.mean(rewards) > 300:\n print(\"You Win!\")\n break",
"mean reward:20.900\nmean reward:35.860\nmean reward:50.820\nmean reward:88.550\nmean reward:132.080\nmean reward:165.890\nmean reward:193.790\nmean reward:166.510\nmean reward:120.910\nmean reward:98.450\nmean reward:236.340\nmean reward:280.410\nmean reward:317.610\nYou Win!\n"
]
],
[
[
"### Video",
"_____no_output_____"
]
],
[
[
"# record sessions\nimport gym.wrappers\nenv = gym.wrappers.Monitor(gym.make(\"CartPole-v0\"),\n directory=\"videos\", force=True)\nsessions = [generate_session() for _ in range(100)]\nenv.close()",
"[2017-03-14 19:36:45,862] Making new env: CartPole-v0\n[2017-03-14 19:36:45,870] DEPRECATION WARNING: env.spec.timestep_limit has been deprecated. Replace your call to `env.spec.timestep_limit` with `env.spec.tags.get('wrapper_config.TimeLimit.max_episode_steps')`. This change was made 12/28/2016 and is included in version 0.7.0\n[2017-03-14 19:36:45,873] Clearing 12 monitor files from previous run (because force=True was provided)\n[2017-03-14 19:36:45,894] Starting new video recorder writing to /home/jheuristic/Downloads/Practical_RL/week6/videos/openaigym.video.0.7776.video000000.mp4\n[2017-03-14 19:36:51,516] Starting new video recorder writing to /home/jheuristic/Downloads/Practical_RL/week6/videos/openaigym.video.0.7776.video000001.mp4\n[2017-03-14 19:36:57,580] Starting new video recorder writing to /home/jheuristic/Downloads/Practical_RL/week6/videos/openaigym.video.0.7776.video000008.mp4\n[2017-03-14 19:37:05,049] Starting new video recorder writing to /home/jheuristic/Downloads/Practical_RL/week6/videos/openaigym.video.0.7776.video000027.mp4\n[2017-03-14 19:37:08,785] Starting new video recorder writing to /home/jheuristic/Downloads/Practical_RL/week6/videos/openaigym.video.0.7776.video000064.mp4\n[2017-03-14 19:37:11,505] Finished writing results. You can upload them to the scoreboard via gym.upload('/home/jheuristic/Downloads/Practical_RL/week6/videos')\n"
],
[
"# show video\nfrom IPython.display import HTML\nimport os\n\nvideo_names = list(\n filter(lambda s: s.endswith(\".mp4\"), os.listdir(\"./videos/\")))\n\nHTML(\"\"\"\n<video width=\"640\" height=\"480\" controls>\n <source src=\"{}\" type=\"video/mp4\">\n</video>\n\"\"\".format(\"./videos/\"+video_names[-1])) # this may or may not be _last_ video. Try other indices",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0dadc79ddeaedb2ef5fe5d8cf91cec866c8a620 | 45,592 | ipynb | Jupyter Notebook | CognitiveAPI.ipynb | hfleitas/GlobalAIBootcamp2019 | 2495659aeba5cc8b7ce6af2f034f02b4adbfa14c | [
"MIT"
] | 1 | 2021-08-15T14:41:26.000Z | 2021-08-15T14:41:26.000Z | CognitiveAPI.ipynb | hfleitas/GlobalAIBootcamp2019 | 2495659aeba5cc8b7ce6af2f034f02b4adbfa14c | [
"MIT"
] | null | null | null | CognitiveAPI.ipynb | hfleitas/GlobalAIBootcamp2019 | 2495659aeba5cc8b7ce6af2f034f02b4adbfa14c | [
"MIT"
] | null | null | null | 69.712538 | 4,382 | 0.511318 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0dadfd38557e277199e5dcecd112f30ade04eb7 | 18,382 | ipynb | Jupyter Notebook | .ipynb_checkpoints/temp_analysis_bonus_2_starter - Copy-checkpoint.ipynb | pratixashah/sqlalchemy | 5848c186202a6c7236812f9a37efaf3739f06256 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/temp_analysis_bonus_2_starter - Copy-checkpoint.ipynb | pratixashah/sqlalchemy | 5848c186202a6c7236812f9a37efaf3739f06256 | [
"MIT"
] | null | null | null | .ipynb_checkpoints/temp_analysis_bonus_2_starter - Copy-checkpoint.ipynb | pratixashah/sqlalchemy | 5848c186202a6c7236812f9a37efaf3739f06256 | [
"MIT"
] | null | null | null | 49.281501 | 9,232 | 0.752312 | [
[
[
"%matplotlib inline\nfrom matplotlib import style\nstyle.use('fivethirtyeight')\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport datetime as dt",
"_____no_output_____"
]
],
[
[
"## Reflect Tables into SQLALchemy ORM",
"_____no_output_____"
]
],
[
[
"# Python SQL toolkit and Object Relational Mapper\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, func, inspect",
"_____no_output_____"
],
[
"# create engine to hawaii.sqlite\nengine = create_engine(\"sqlite:///./Resources/hawaii.sqlite\")",
"_____no_output_____"
],
[
"# reflect an existing database into a new model\n\ninspector = inspect(engine)\n\n# reflect the tables\ninspector.get_table_names()",
"_____no_output_____"
],
[
"# View all of the classes that automap found\n\ncolumns = inspector.get_columns('measurement')\nprint('\\nmeasurement:')\nfor c in columns:\n print(c['name'], c[\"type\"])\n \ncolumns = inspector.get_columns('station')\nprint('\\nstation:')\nfor c in columns:\n print(c['name'], c[\"type\"])",
"\nmeasurement:\nid INTEGER\nstation TEXT\ndate TEXT\nprcp FLOAT\ntobs FLOAT\n\nstation:\nid INTEGER\nstation TEXT\nname TEXT\nlatitude FLOAT\nlongitude FLOAT\nelevation FLOAT\n"
],
[
"# Save references to each table\n\nBase = automap_base()\nBase.prepare(engine, reflect=True)\nBase.classes.keys()\n\n\nMeasurement = Base.classes.measurement\nStation = Base.classes.station",
"_____no_output_____"
],
[
"# Create our session (link) from Python to the DB\nsession = Session(bind=engine)",
"_____no_output_____"
]
],
[
[
"## Bonus Challenge Assignment: Temperature Analysis II",
"_____no_output_____"
]
],
[
[
"# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d' \n# and return the minimum, maximum, and average temperatures for that range of dates\ndef calc_temps(start_date, end_date):\n \"\"\"TMIN, TAVG, and TMAX for a list of dates.\n \n Args:\n start_date (string): A date string in the format %Y-%m-%d\n end_date (string): A date string in the format %Y-%m-%d\n \n Returns:\n TMIN, TAVE, and TMAX\n \"\"\"\n \n return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\\\n filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()\n\n# For example\nprint(calc_temps('2012-02-28', '2012-03-05'))",
"[(62.0, 69.57142857142857, 74.0)]\n"
],
[
"# Use the function `calc_temps` to calculate the tmin, tavg, and tmax \n# for a year in the data set\ntemp_data = calc_temps('2017-08-01','2017-08-07')\nprint(temp_data)",
"[(72.0, 79.25, 83.0)]\n"
],
[
"# Plot the results from your previous query as a bar chart. \n# Use \"Trip Avg Temp\" as your Title\n# Use the average temperature for bar height (y value)\n# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)\n\n\navg_temp = temp_data[0][1]\nerr = [temp_data[0][2] - temp_data[0][0]]\n\nfig, ax = plt.subplots(figsize=(3,6))\nax.bar(1,height=[avg_temp], yerr=err, align='center', alpha=0.5, ecolor='black')\n\nax.set_ylabel('Temp(F)',fontsize=12)\nax.set_title('Trip Avg Temp', fontsize=15)\n\nax.set_xticklabels([])\nax.yaxis.grid(True)\nax.xaxis.grid(False)\nplt.tight_layout()\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Daily Rainfall Average",
"_____no_output_____"
]
],
[
[
"# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's \n# matching dates.\n# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation\n",
"_____no_output_____"
],
[
"# Use this function to calculate the daily normals \n# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)\n\ndef daily_normals(date):\n \"\"\"Daily Normals.\n \n Args:\n date (str): A date string in the format '%m-%d'\n \n Returns:\n A list of tuples containing the daily normals, tmin, tavg, and tmax\n \n \"\"\"\n \n sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]\n return session.query(*sel).filter(func.strftime(\"%m-%d\", Measurement.date) == date).all()\n\n# For example\ndaily_normals(\"01-01\")",
"_____no_output_____"
],
[
"# calculate the daily normals for your trip\n# push each tuple of calculations into a list called `normals`\n\n# Set the start and end date of the trip\nstart_date = '2017-08-01'\nend_date = '2017-08-07'\n\n# Use the start and end date to create a range of dates\n\n\n# Strip off the year and save a list of strings in the format %m-%d\n\n\n# Use the `daily_normals` function to calculate the normals for each date string \n# and append the results to a list called `normals`.\n",
"_____no_output_____"
],
[
"# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index",
"_____no_output_____"
],
[
"# Plot the daily normals as an area plot with `stacked=False`",
"_____no_output_____"
]
],
[
[
"## Close Session",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0daf055e815da41c7713f6895cbe0daa540d7b5 | 10,985 | ipynb | Jupyter Notebook | sklearn/notes/parameter_tuning_ex_03.ipynb | shamik-biswas-rft/CodeSnippets | d27621ab65fd9bd1c5195db028eb5a42a469d420 | [
"MIT"
] | 1 | 2022-03-08T11:28:58.000Z | 2022-03-08T11:28:58.000Z | sklearn/notes/parameter_tuning_ex_03.ipynb | shamik-biswas-rft/CodeSnippets | d27621ab65fd9bd1c5195db028eb5a42a469d420 | [
"MIT"
] | null | null | null | sklearn/notes/parameter_tuning_ex_03.ipynb | shamik-biswas-rft/CodeSnippets | d27621ab65fd9bd1c5195db028eb5a42a469d420 | [
"MIT"
] | 1 | 2022-03-08T10:28:33.000Z | 2022-03-08T10:28:33.000Z | 34.328125 | 116 | 0.568138 | [
[
[
"# 📝 Exercise M3.02\n\nThe goal is to find the best set of hyperparameters which maximize the\ngeneralization performance on a training set.\n\nHere again with limit the size of the training set to make computation\nrun faster. Feel free to increase the `train_size` value if your computer\nis powerful enough.",
"_____no_output_____"
]
],
[
[
"\nimport numpy as np\nimport pandas as pd\n\nadult_census = pd.read_csv(\"../datasets/adult-census.csv\")\n\ntarget_name = \"class\"\ntarget = adult_census[target_name]\ndata = adult_census.drop(columns=[target_name, \"education-num\"])\nfrom sklearn.model_selection import train_test_split\n\ndata_train, data_test, target_train, target_test = train_test_split(\n data, target, train_size=0.2, random_state=42)",
"_____no_output_____"
]
],
[
[
"In this exercise, we will progressively define the classification pipeline\nand later tune its hyperparameters.\n\nOur pipeline should:\n* preprocess the categorical columns using a `OneHotEncoder` and use a\n `StandardScaler` to normalize the numerical data.\n* use a `LogisticRegression` as a predictive model.\n\nStart by defining the columns and the preprocessing pipelines to be applied\non each group of columns.",
"_____no_output_____"
]
],
[
[
"from sklearn.compose import make_column_selector as selector\n\n# Write your code here.\ncategorical_selector = selector(dtype_include=object)\nnumerical_selector = selector(dtype_exclude=object)\n\ncategorical_columns = categorical_selector(data)\nnumerical_columns = numerical_selector(data)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import OneHotEncoder, StandardScaler\n\n# Write your code here.\ncat_processor = OneHotEncoder(handle_unknown='ignore')\nnum_processor = StandardScaler()",
"_____no_output_____"
]
],
[
[
"Subsequently, create a `ColumnTransformer` to redirect the specific columns\na preprocessing pipeline.",
"_____no_output_____"
]
],
[
[
"from sklearn.compose import ColumnTransformer\n\n# Write your code here.\npreprocessor = ColumnTransformer(\n[\n ('cat_process', cat_processor, categorical_columns),\n ('num_process', num_processor, numerical_columns)\n])",
"_____no_output_____"
]
],
[
[
"Assemble the final pipeline by combining the above preprocessor\nwith a logistic regression classifier. Force the maximum number of\niterations to `10_000` to ensure that the model will converge.",
"_____no_output_____"
]
],
[
[
"from sklearn.pipeline import make_pipeline\nfrom sklearn.linear_model import LogisticRegression\n\n# Write your code here.\nmodel = make_pipeline(preprocessor, LogisticRegression(max_iter=11_000))",
"_____no_output_____"
]
],
[
[
"Use `RandomizedSearchCV` with `n_iter=20` to find the best set of\nhyperparameters by tuning the following parameters of the `model`:\n\n- the parameter `C` of the `LogisticRegression` with values ranging from\n 0.001 to 10. You can use a log-uniform distribution\n (i.e. `scipy.stats.loguniform`);\n- the parameter `with_mean` of the `StandardScaler` with possible values\n `True` or `False`;\n- the parameter `with_std` of the `StandardScaler` with possible values\n `True` or `False`.\n\nOnce the computation has completed, print the best combination of parameters\nstored in the `best_params_` attribute.",
"_____no_output_____"
]
],
[
[
"model.get_params()",
"_____no_output_____"
],
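[
"# Editorial note (added for clarity; an assumption, not part of the original\n# exercise): model.get_params() above is the quickest way to discover the\n# exact nested parameter names -- e.g. 'columntransformer__num_process__with_std'\n# -- that serve as keys in the param_distributions dictionary in the next cell.",
"_____no_output_____"
],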
[
"from sklearn.model_selection import RandomizedSearchCV\nfrom scipy.stats import loguniform\n\n# Write your code here.\nparams_dict = {\n 'columntransformer__num_process__with_mean': [True, False],\n 'columntransformer__num_process__with_std': [True, False],\n 'logisticregression__C': loguniform(1e-3, 10)\n}\n\nmodel_random_search = RandomizedSearchCV(model,\n param_distributions= params_dict,\n n_iter=20, error_score='raise',\n n_jobs=-1, verbose=1)\nmodel_random_search.fit(data_train, target_train)\nmodel_random_search.best_params_",
"Fitting 5 folds for each of 20 candidates, totalling 100 fits\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0daf681b998fededf62c48ffba21d1a92331f50 | 136,574 | ipynb | Jupyter Notebook | plot_figs/fig03/fig03.ipynb | apaloczy/InnerShelfReynoldsStresses | daacb2fffd0a2017ffe8c0a329217972415e96cd | [
"MIT"
] | 3 | 2021-01-12T01:20:40.000Z | 2021-12-11T08:18:02.000Z | plot_figs/fig03/fig03.ipynb | apaloczy/InnerShelfReynoldsStresses | daacb2fffd0a2017ffe8c0a329217972415e96cd | [
"MIT"
] | null | null | null | plot_figs/fig03/fig03.ipynb | apaloczy/InnerShelfReynoldsStresses | daacb2fffd0a2017ffe8c0a329217972415e96cd | [
"MIT"
] | 2 | 2021-05-06T22:52:04.000Z | 2021-12-11T08:18:03.000Z | 500.271062 | 127,224 | 0.935932 | [
[
[
"# Description: Plot Figure 3 (Overview of wind, wave and density stratification during the field experiment).\n# Author: André Palóczy\n# E-mail: [email protected]\n# Date: December/2020",
"_____no_output_____"
],
[
"import numpy as np\nfrom matplotlib import pyplot as plt\nimport matplotlib.dates as mdates\nfrom pandas import Timestamp\nfrom xarray import open_dataset, DataArray\nfrom dewaveADCP.utils import fourfilt",
"_____no_output_____"
],
[
"def fitN2(T, z, g=9.8, alpha=2e-4):\n fg = ~np.isnan(T)\n p = np.polyfit(z[fg], T[fg], 1)\n Tfit = np.polyval(p, z)\n dTdz = (Tfit[0] - Tfit[-1])/z.ptp()\n\n N = np.sqrt(g*alpha*np.abs(dTdz)) # [1/s].\n\n return N, Tfit",
"_____no_output_____"
],
[
"plt. close('all')\n\nhead = \"../../data_reproduce_figs/\"\nds = open_dataset(head+\"windstress-wave-tide.nc\")\ntwind = ds['twind'].values\ntwave = ds['twave'].values\nttide = ds['ttide'].values\ntaux, tauy = ds['taux'].values, ds['tauy'].values\nHs = ds['Hs'].values\nTp = ds['Tp'].values\nmeandir = ds['wavedir'].values\nmeandirspread = ds['wavespread'].values",
"_____no_output_____"
],
[
"# Low-pass filter wind stress.\ndts_wind = 2*60 # 2 min sampling frequency.\nTmax = dts_wind*taux.size*2\nTmin = 60*60*30 # 30 h low-pass filter.\ntaux = fourfilt(taux, dts_wind, Tmax, Tmin)\ntauy = fourfilt(tauy, dts_wind, Tmax, Tmin)\n\n\ndsSIO = open_dataset(head+\"windstress-SIOMiniMETBuoy.nc\")\ntwindSIO = dsSIO['taux']['t']\ntauxSIO, tauySIO = dsSIO['taux'].values, dsSIO['tauy'].values\ndts_windSIO = 60*60 # 1 h averages.\ntauxSIO = fourfilt(tauxSIO, dts_windSIO, Tmax, Tmin)\ntauySIO = fourfilt(tauySIO, dts_windSIO, Tmax, Tmin)\n\ntl, tr = Timestamp('2017-09-08'), Timestamp('2017-11-01')",
"_____no_output_____"
],
[
"# Plot wind and wave variables.\nshp = (4, 2)\nfig = plt.figure(figsize=(7, 7))\nax1 = plt.subplot2grid(shp, (0, 0), colspan=2, rowspan=1)\nax2 = plt.subplot2grid(shp, (1, 0), colspan=2, rowspan=1)\nax3 = plt.subplot2grid(shp, (2, 0), colspan=2, rowspan=1)\nax4 = plt.subplot2grid(shp, (3, 0), colspan=1, rowspan=1) # T profiles from different moorings.\nax5 = plt.subplot2grid(shp, (3, 1), colspan=1, rowspan=1) # Top-bottom temperature difference.\n\nax1.plot(twindSIO, tauxSIO, color='gray', linestyle='--')\nax1.plot(twindSIO, tauySIO, color='k', linestyle='--')\nax1.plot(twind, taux, color='gray', label=r\"$\\tau_x$\")\nax1.plot(twind, tauy, color='k', label=r\"$\\tau_y$\")\nax1.axhline(color='k', linewidth=1)\nax1.set_ylabel('Wind stress [Pa]', fontsize=13)\nax1.legend(frameon=False, loc=(0.9, -0.01), handlelength=0.8)\nax1.set_ylim(-0.1, 0.1)\n\nax2.plot(twave, Hs, 'r', label=r'$H_s$')\nax2r = ax2.twinx()\nax2r.plot(twave, Tp, 'b', label=r'$T_p$')\nax2.set_ylabel(r'$H_s$ [m]', fontsize=15, color='r')\nax2r.set_ylabel(r'Peak period [s]', fontsize=13, color='b')\nax2r.spines['right'].set_color('b')\nax2r.spines['left'].set_color('r')\nax2.tick_params(axis='y', colors='r')\nax2r.tick_params(axis='y', colors='b')\n\nax3.fill_between(twave, meandir-meandirspread, meandir+meandirspread, color='k', alpha=0.2)\nax3.plot(twave, meandir, 'k')\nax3.set_ylim(240, 360)\nax3.set_ylabel(r'Wave direction [$^\\circ$]', fontsize=12)\n\nax1.xaxis.set_ticklabels([])\nax2.xaxis.set_ticklabels([])\nax1.set_xlim(tl, tr)\nax2.set_xlim(tl, tr)\nax3.set_xlim(tl, tr)\nfig.subplots_adjust(hspace=0.3)\nax2.axes.xaxis.set_tick_params(rotation=10)\n\nax1.text(0.01, 0.85, r'(a)', fontsize=13, transform=ax1.transAxes)\nax2.text(0.01, 0.85, r'(b)', fontsize=13, transform=ax2.transAxes)\nax3.text(0.01, 0.85, r'(c)', fontsize=13, transform=ax3.transAxes)\n\n\nbbox = ax2.get_position()\noffset = 0.04\nax2.set_position([bbox.x0, bbox.y0 + offset, bbox.x1-bbox.x0, bbox.y1 - bbox.y0])\n\nbbox = ax3.get_position()\noffset = 0.08\nax3.set_position([bbox.x0, bbox.y0 + offset, bbox.x1-bbox.x0, bbox.y1 - bbox.y0])\n\n\nlocator = mdates.AutoDateLocator(minticks=12, maxticks=24)\nfmts = ['', '%Y', '%Y', '%Y', '%Y', '%Y %H:%M']\nformatter = mdates.ConciseDateFormatter(locator, offset_formats=fmts)\nax3.xaxis.set_major_locator(locator)\nax3.xaxis.set_major_formatter(formatter)\n\n# Panel with all T profiles.\nwanted_ids = ['OC25M', 'OC25SA', 'OC25SB', 'OC40S', 'OC40N']\n\ncol = dict(OC25M='k', OC25SA='r', OC25SB='m', OC40S='b', OC40N='c')\nfor id in wanted_ids:\n ds = open_dataset(head+\"Tmean-\"+id+\".nc\")\n T, zab = ds[\"Tmean\"].values, ds[\"z\"].values\n ax4.plot(T, zab, linestyle='none', marker='o', ms=5, mfc=col[id], mec=col[id], label=id)\nax4.legend(loc='upper left', bbox_to_anchor=(-0.05, 1.02), frameon=False, fontsize=10, labelspacing=0.01, handletextpad=0, borderpad=0, bbox_transform=ax4.transAxes)\nax4.set_xlabel(r'$T$ [$^o$C]', fontsize=13)\nax4.set_ylabel(r'zab [m]', fontsize=13)\n\n# Fit a line to each mooring to estimate N2.\nNavg = 0\nfor id in wanted_ids:\n ds = open_dataset(head+\"Tmean-\"+id+\".nc\")\n T, zab = ds[\"Tmean\"].values, ds[\"z\"].values\n N, Tfit = fitN2(T, zab)\n txt = r\"$%s --> %.2f \\times 10^{-2}$ s$^{-1}$\"%(id, N*100)\n print(txt)\n Navg += N\nNavg /= len(wanted_ids)\n\n# Time series of top-to-bottom T difference.\nTstrat = open_dataset(head+\"Tstrat-OC25M.nc\")\nTstrat, tt = Tstrat[\"Tstrat\"].values, Tstrat[\"t\"].values\nax5.plot(tt, Tstrat, 'k')\nax1.yaxis.set_ticks([-0.1, -0.075, -0.05, 
-0.025, 0, 0.025, 0.05, 0.075, 0.1])\nax2r.yaxis.set_ticks([5, 10, 15, 20, 25])\nax4.yaxis.set_ticks([0, 10, 20, 30, 40])\nax5.yaxis.set_ticks(np.arange(7))\n\nax5.set_xlim(tl, tr)\nax5.yaxis.tick_right()\nax5.yaxis.set_label_position(\"right\")\nax5.set_ylabel(r'$T$ difference [$^o$C]', fontsize=13)\n\n\nlocator = mdates.AutoDateLocator()\nfmts = ['', '%Y', '%Y', '%Y', '%Y', '%Y %H:%M']\nformatter = mdates.ConciseDateFormatter(locator, offset_formats=fmts)\nax5.xaxis.set_major_locator(locator)\nax5.xaxis.set_major_formatter(formatter)\n\n\n\nax4.text(0.90, 0.1, '(d)', fontsize=13, transform=ax4.transAxes)\nax5.text(0.02, 0.1, '(e)', fontsize=13, transform=ax5.transAxes)\n\n\noffsetx = 0.03\noffsety = 0.065\nbbox = ax4.get_position()\nax4.set_position([bbox.x0 + offsetx, bbox.y0, bbox.x1-bbox.x0, bbox.y1 - bbox.y0 + offsety])\n\nbbox = ax5.get_position()\nax5.set_position([bbox.x0 - offsetx, bbox.y0, bbox.x1-bbox.x0, bbox.y1 - bbox.y0 + offsety])",
"$OC25M --> 1.46 \\times 10^{-2}$ s$^{-1}$\n$OC25SA --> 1.45 \\times 10^{-2}$ s$^{-1}$\n$OC25SB --> 1.42 \\times 10^{-2}$ s$^{-1}$\n$OC40S --> 1.25 \\times 10^{-2}$ s$^{-1}$\n$OC40N --> 1.20 \\times 10^{-2}$ s$^{-1}$\n"
],
[
"plt.show()\nfig.savefig(\"fig03.png\", dpi=300, bbox_inches='tight')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0dafa7290f388701847a48279e0e31d3266a10e | 1,450 | ipynb | Jupyter Notebook | lectures/difference-in-difference/notebook.ipynb | HumanCapitalAnalysis/ose-data-science | d5be68de68f170f8e8f11c9ed635b42f19100f87 | [
"MIT"
] | 62 | 2019-04-02T11:51:06.000Z | 2020-07-11T05:28:27.000Z | lectures/difference-in-difference/notebook.ipynb | HumanCapitalAnalysis/microeconometrics | d5be68de68f170f8e8f11c9ed635b42f19100f87 | [
"MIT"
] | 49 | 2019-04-05T10:57:07.000Z | 2020-07-07T20:41:19.000Z | lectures/difference-in-difference/notebook.ipynb | HumanCapitalAnalysis/ose-data-science | d5be68de68f170f8e8f11c9ed635b42f19100f87 | [
"MIT"
] | 46 | 2019-04-03T08:31:02.000Z | 2020-07-13T12:43:26.000Z | 29 | 252 | 0.601379 | [
[
[
"# Difference in Difference\n",
"_____no_output_____"
],
[
"### References\n\n* **Athey, S., & Imbens, G. (2021)**. [Design-based analysis in difference-in-differences settings with staggered adoption](https://www.sciencedirect.com/science/article/abs/pii/S0304407621000488), *Journal of Econometrics*.\n\n\n* **Bertrand, M., Dufflo, E., & Mullainathan, S. (2004)**. [How much should we trust differences-in-differences estimates?](https://ideas.repec.org/a/oup/qjecon/v119y2004i1p249-275..html), *The Quarterly Journal of Economics*, 119(1), 249-275.\n\n\n* **Goodman-Bacon, A. (2021)**. [Difference-in-differences with variation in treatment timing](https://www.sciencedirect.com/science/article/abs/pii/S0304407621001445), *Journal of Econometrics*, 255(2), 254-277.",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown"
]
] |
d0daff71dc82a3c07f61c9d1350263ce7f3ef291 | 11,279 | ipynb | Jupyter Notebook | notebooks/03-Environment-Challenge.ipynb | acwooding/easydata-tutorial | 0a810e6c622b26d6453e4ee1bded5be6d1e7b1d9 | [
"MIT"
] | 4 | 2021-10-30T19:19:32.000Z | 2021-11-16T22:29:30.000Z | notebooks/03-Environment-Challenge.ipynb | acwooding/easydata-tutorial | 0a810e6c622b26d6453e4ee1bded5be6d1e7b1d9 | [
"MIT"
] | 2 | 2021-10-29T20:30:38.000Z | 2021-12-23T15:01:09.000Z | notebooks/03-Environment-Challenge.ipynb | acwooding/easydata-tutorial | 0a810e6c622b26d6453e4ee1bded5be6d1e7b1d9 | [
"MIT"
] | 4 | 2021-10-30T06:50:22.000Z | 2022-03-30T01:21:18.000Z | 38.10473 | 499 | 0.631439 | [
[
[
"## Stage 3: What do I need to install?\nMaybe your experience looks like the typical python dependency management (https://xkcd.com/1987/):\n\n<img src=https://imgs.xkcd.com/comics/python_environment.png>\n\nFurthermore, data science packages can have all sorts of additional non-Python dependencies which makes things even more confusing, and we end up spending more time sorting out our dependencies than doing data science. If you take home nothing else out of this tutorial, learn this stage. I promise. It will save you, and everyone who works with you, many days of your life back.\n\n",
"_____no_output_____"
],
[
"### Reproducibility Issues:\n* (NO-ENVIRONMENT-INSTRUCTIONS) Chicken and egg issue with environments. No environment.yml file or the like. (Even if there are some instructions in a notebook).\n* (NO-VERSION-PIN) Versions not pinned. E.g. uses a dev branch without a clear indication of when it became released.\n* (IMPOSSIBLE-ENVIRONMENT) dependencies are not resolvable due to version clashes. (e.g. need <=0.48 and >=0.49)\n* (ARCH-DIFFERENCE) The same code runs differently on different architectures\n* (MONOLITHIC-ENVIRONMENT) One environment to rule (or fail) them all. \n",
"_____no_output_____"
],
[
"\n### Default Better Principles\n* **Use (at least) one virtual environment per repo**: And use the same name for the environment as the repo.\n* **Generate lock files**: Lock files include every single dependency in your dependency chain. Lock files are necessarily platform specific, so you need one per platform that you support. This way you have a perfect version pin on the environment that you used for that moment in time.\n* **Check in your environment creation instructions**: That means an `environment.yml` file for conda, and its matching lock file(s). ",
"_____no_output_____"
],
[
"## The Easydata way: `make create_environment`\nWe like `conda` for environment management since it's the least bad option for most data science workflows. There are no perfect ways of doing this. Here are some basics.\n\n",
"_____no_output_____"
],
[
"### Setting up your environment\n### clone the repo\n```\n git clone https://github.com/acwooding/easydata-tutorial\n cd easydata-tutorial\n```\n\n### Initial setup\n\n* **YOUR FIRST TASK OF THIS STAGE***: Check if there is a CONDA_EXE environment variable set with the full path to your conda binary; e.g. by doing the following:\n\n```\nexport | grep CONDA_EXE\n\n```\n* **NOTE:** if there is no CONDA_EXE, you will need to find your conda binary and record its location in the CONDA_EXE line of `Makefile.include`\n\nRecent versions of conda have made finding the actual binary harder than it should be. This might work:\n```\n >>> which conda\n ~/miniconda3/bin/conda\n```\n\n* Create and switch to the virtual environment:\n```\nmake create_environment\nconda activate easydata-tutorial\nmake update_environment\n```\n\nNow you're ready to run `jupyter notebook` (or jupyter lab) and explore the notebooks in the `notebooks` directory.\n\nFrom within jupyter, re-open this notebook and run the cells below.\n",
"_____no_output_____"
],
[
"**Your next Task**: Run the next cell to ensure that the packages got added to the python environment correctly.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\nimport matplotlib.pyplot as plt\nimport pandas as pd\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Updating your conda and pip environments\nThe `make` commands, `make create_environment` and `make update_environment` are wrappers that allow you to easily manage your conda and pip environments using a file called `environment.yml` file, which lists the packages you want in your python environment.\n\n(If you ever forget which `make` subcommand to run, you can run `make` by itself and it will provide a list of subcommands that are available.)\n\n\nWhen adding packages to your python environment, **never do a `pip install` or `conda install` directly**. Always edit `environment.yml` and `make update_environment` instead.\n\nYour `environment.yml` file will look something like this:\n```\nname: easydata-tutorial\n - pip\n - pip:\n - -e . # conda >= 4.4 only\n - python-dotenv>=0.5.1\n - nbval\n - nbdime\n - umap-learn\n - gdown\n - # Add more pip dependencies here\n - setuptools\n - wheel\n - git>=2.5 # for git worktree template updating\n - sphinx\n - bokeh\n - click\n - colorcet\n - coverage\n - coveralls\n - datashader\n - holoviews\n - matplotlib\n - jupyter\n - # Add more conda dependencies here\n...\n```\nNotice you can add conda and pip dependencies separately. For good reproducibility, we recommend you always try and use the conda version of a package if it is available.\n\nOnce you're done your edits, run `make update_environment` and voila, your python environment is up to date.\n\n**Git Bonus Task:** To save or share your updated environment, check in your `environment.yml` file using git.\n",
"_____no_output_____"
],
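[
"For instance, adding a conda package is a one-line edit plus one command (a sketch only -- `pyarrow` is a hypothetical example package here, and the surrounding lines of your own file may differ):\n```\ndependencies:\n  ...\n  - matplotlib\n  - jupyter\n  - pyarrow   # <-- newly added dependency\n  - # Add more conda dependencies here\n```\nThen `make update_environment` actually installs it.",
"_____no_output_____"
],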
[
"**YOUR NEXT TASK** in the Quest: Updating your python environment to include the `seaborn` package. But first, a quick tip with using `conda` environments in notebooks:",
"_____no_output_____"
],
[
"#### Using your conda environment in a jupyter notebook\nIf you make a new notebook, and your packages don't seem to be available, make sure to select the `easydata-tutorial` Kernel from within the notebook. If you are somehow in another kernel, select **Kernel -> Change kernel -> Python[conda env:easydata-tutorial]**. If you don't seem to have that option, make sure that you ran `jupyter notebooks` with the `easydata-tutorial` conda environment enabled, and that `which jupyter` points to the correct (`easydata-tutorial`) version of jupyter.\n\nYou can see what's in your notebook's conda environment by putting the following in a cell and running it:",
"_____no_output_____"
]
],
[
[
"%conda info",
"_____no_output_____"
]
],
[
[
"Another useful cell to include is the following.\n\nIf you want your environment changes to be immediately available in your running notebooks, make sure to run a notebook cell containing:",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
]
],
[
[
"If you did your task correctly, the following import will succeed.",
"_____no_output_____"
]
],
[
[
"import seaborn as sns",
"_____no_output_____"
]
],
[
[
"Remember, you should **never** do a `pip install` or `conda install` manually. You want to make sure your environment changes are saved to your data science repo. Instead, edit `environment.yml` and do a `make update_environment`.",
"_____no_output_____"
],
[
"Your **NEXT TASK of this stage**: Run `make env_challenge` and follow the instructions if it works.",
"_____no_output_____"
],
[
"### BONUS Task: Lockfiles\n* Do this if there's time *\n\nLockfiles are a way of separating the list of \"packages I want\" from \"packages I need to install to make everything work\". For reproducibility reasons, we want to keep track of both files, but not in the same place. Usually, this separating is done with something called a \"lockfile.\"\n\nUnlike several other virtual environment managers, conda doesn't have lockfiles. To work around this limitation, Easydata generates a basic lockfile from `environment.yml` whenever you run `make update_environment`.\n\nThis lockfile is a file called `environment.{$ARCH}.lock.yml` (e.g. `environment.i386.lock.yml`). This file keeps a record of the exact environment that is currently installed in your conda environment `easydata-tutorial`. If you ever need to reproduce an environment exactly, you can install from the `.lock.yml` file. (Note: These are architecture dependent, so don't expect a mac lockfile to work on linux, and vice versa).\n\nFor more instructions on setting up and maintaining your python environment (including how to point your environment at your custom forks and work in progress) see [Setting up and Maintaining your Conda Environment Reproducibly](../reference/easydata/conda-environments.md).\n\n\n**Your BONUS Task** in the Quest: Take a look at the lockfile, and compare it's content to `environment.yml`. Then ask yourself, \"aren't I glad I don't have to maintain this list manually?\" ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0db297c9d3c88d111a54a5a8c2a032fa59299ea | 4,227 | ipynb | Jupyter Notebook | demo.ipynb | w-hc/coco_vis | cb21220e5845a2e5d7d8d5b4caa30e07c1004d05 | [
"MIT"
] | null | null | null | demo.ipynb | w-hc/coco_vis | cb21220e5845a2e5d7d8d5b4caa30e07c1004d05 | [
"MIT"
] | null | null | null | demo.ipynb | w-hc/coco_vis | cb21220e5845a2e5d7d8d5b4caa30e07c1004d05 | [
"MIT"
] | null | null | null | 20.519417 | 120 | 0.524722 | [
[
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"import pickle\nfrom coco_vis import Visualizer",
"_____no_output_____"
],
[
"img_directory='/share/data/vision-greg/coco/images/val2017'",
"_____no_output_____"
],
[
"with open('coco_2017_val.pkl', 'rb') as f:\n coco_eval = pickle.load(f, )",
"_____no_output_____"
]
],
[
[
"- First make sure that our notebook can display a simple widget. Some ppl might have trouble with this.",
"_____no_output_____"
]
],
[
[
"from ipywidgets import interact",
"_____no_output_____"
],
[
"def f(x):\n return x",
"_____no_output_____"
],
[
"interact(f, x=10)",
"_____no_output_____"
]
],
[
[
"- If you see a slider that can be dragged, it's good. \n- Resolve any widget related notebook setup problems before proceeding",
"_____no_output_____"
]
],
[
[
"vis = Visualizer(\n coco_eval, matching_threshold=0.5, area='all', img_directory=img_directory\n)",
"_____no_output_____"
]
],
[
[
"- the widget will disappear after each session. Have to re-run",
"_____no_output_____"
]
],
[
[
"vis.category_PR_curve()",
"_____no_output_____"
],
[
"vis.image_vis()",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0db351930f52e666106f53fb18ddff151e9e0ed | 17,004 | ipynb | Jupyter Notebook | notebooks/Post Rank.ipynb | itsnamgyu/reid-metric | 437e02ebad510b482f620a293fd8c7baa4f42ad6 | [
"MIT"
] | null | null | null | notebooks/Post Rank.ipynb | itsnamgyu/reid-metric | 437e02ebad510b482f620a293fd8c7baa4f42ad6 | [
"MIT"
] | null | null | null | notebooks/Post Rank.ipynb | itsnamgyu/reid-metric | 437e02ebad510b482f620a293fd8c7baa4f42ad6 | [
"MIT"
] | null | null | null | 26.485981 | 147 | 0.550929 | [
[
[
"import data\nimport torch\nfrom utils.distmat import *\nfrom utils.evaluation import *\nfrom hitl import *\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Load Data",
"_____no_output_____"
]
],
[
[
"key = data.get_output_keys()[2]\nkey",
"_____no_output_____"
],
[
"output = data.load_output(key)\nqf = torch.Tensor(output[\"qf\"])\ngf = torch.Tensor(output[\"gf\"])\nq_pids = np.array(output[\"q_pids\"])\ng_pids = np.array(output[\"g_pids\"])\nq_camids = np.array(output[\"q_camids\"])\ng_camids = np.array(output[\"g_camids\"])\ndistmat = compute_distmat(qf, gf)",
"_____no_output_____"
]
],
[
[
"### Baseline Results",
"_____no_output_____"
]
],
[
[
"result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)\nresult",
"_____no_output_____"
]
],
[
[
"### Re-ranked Results",
"_____no_output_____"
]
],
[
[
"all_distmat = compute_inner_distmat(torch.cat((qf, gf)))\nre_distmat = rerank_distmat(all_distmat, qf.shape[0])\nre_result = evaluate(re_distmat, q_pids, g_pids, q_camids, g_camids)\nre_result",
"_____no_output_____"
]
],
[
[
"## One-Shot Evaluation Using Modules",
"_____no_output_____"
],
[
"### Rocchio",
"_____no_output_____"
]
],
[
[
"ntot = torch.as_tensor\nrocchio.run(qf, gf, ntot(q_pids), ntot(g_pids), ntot(q_camids), ntot(g_camids), t=3)",
"_____no_output_____"
]
],
[
[
"### Neighborhood Expansion (Min)",
"_____no_output_____"
]
],
[
[
"ntot = torch.from_numpy\nne.run(qf, gf, ntot(q_pids), ntot(g_pids), ntot(q_camids), ntot(g_camids), t=3, method=\"min\")",
"_____no_output_____"
]
],
[
[
"### Neighborhood Expansion (Mean)",
"_____no_output_____"
]
],
[
[
"ntot = torch.from_numpy\nne.run(qf, gf, ntot(q_pids), ntot(g_pids), ntot(q_camids), ntot(g_camids), t=3, method=\"mean\")",
"_____no_output_____"
]
],
[
[
"## Development",
"_____no_output_____"
],
[
"## Feedback Models",
"_____no_output_____"
]
],
[
[
"q_pids = torch.tensor(output[\"q_pids\"])\ng_pids = torch.tensor(output[\"g_pids\"])\nq = len(q_pids)\ng = len(g_pids)\nm = qf.shape[1]\nq_camids = np.array(output[\"q_camids\"])\ng_camids = np.array(output[\"g_camids\"])",
"_____no_output_____"
]
],
[
[
"### Naive Feedback",
"_____no_output_____"
]
],
[
[
"if input(\"reset? \") == \"y\":\n positive_indices = torch.zeros((q, g), dtype=bool)\n negative_indices = torch.zeros((q, g), dtype=bool)",
"_____no_output_____"
],
[
"for i in tqdm(range(5)):\n qf_adjusted = qf # no adjust, naive re-rank\n\n distmat = compute_distmat(qf_adjusted, gf)\n distmat[positive_indices] = float(\"inf\")\n distmat[negative_indices] = float(\"inf\")\n\n # Select feedback (top-1 from remaining gallery instances)\n distances, indices = distmat.min(dim=1)\n assert(tuple(distances.shape) == (q,))\n assert(tuple(indices.shape) == (q,))\n\n pmap = g_pids[indices] == q_pids\n positive_q = torch.arange(0, q)[pmap]\n negative_q = torch.arange(0, q)[pmap == False]\n positive_g = indices[pmap]\n negative_g = indices[pmap== False]\n\n existing = positive_indices[positive_q, positive_g]\n assert(not existing.any())\n positive_indices[positive_q, positive_g] = True\n existing = negative_indices[negative_q, negative_g]\n assert(not existing.any())\n negative_indices[negative_q, negative_g] = True",
"_____no_output_____"
],
[
"distmat = compute_distmat(qf_adjusted, gf)\ndistmat[positive_indices] = 0\ndistmat[negative_indices] = float(\"inf\")\nnaive_new_result = evaluate(distmat.numpy(), q_pids, g_pids, q_camids, g_camids)\nnaive_new_result",
"_____no_output_____"
]
],
[
[
"### Rocchio",
"_____no_output_____"
]
],
[
[
"if input(\"reset? \") == \"y\":\n positive_indices = torch.zeros((q, g), dtype=bool)\n negative_indices = torch.zeros((q, g), dtype=bool)",
"_____no_output_____"
],
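[
"# Editorial note (added for clarity; not part of the original experiment): the loop\n# in the next cell implements the classic Rocchio relevance-feedback update on the\n# query features,\n#\n#     qf_adjusted = alpha * qf + beta * mean(positive gallery feats)\n#                              - gamma * mean(negative gallery feats)\n#\n# with alpha=1, beta=0.65, gamma=0.35 as set below.",
"_____no_output_____"
],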
[
"alpha = 1\nbeta = 0.65\ngamma = 0.35\nqf_adjusted = qf\n\nfor i in tqdm(range(5)):\n distmat = compute_distmat(qf_adjusted, gf)\n distmat[positive_indices] = float(\"inf\")\n distmat[negative_indices] = float(\"inf\")\n\n # Select feedback (top-1 from remaining gallery instances)\n distances, indices = distmat.min(dim=1)\n assert(tuple(distances.shape) == (q,))\n assert(tuple(indices.shape) == (q,))\n\n # Apply feedback\n pmap = g_pids[indices] == q_pids\n positive_q = torch.arange(0, q)[pmap]\n negative_q = torch.arange(0, q)[pmap == False]\n positive_g = indices[pmap]\n negative_g = indices[pmap== False]\n\n existing = positive_indices[positive_q, positive_g]\n assert(not existing.any())\n positive_indices[positive_q, positive_g] = True\n existing = negative_indices[negative_q, negative_g]\n assert(not existing.any())\n negative_indices[negative_q, negative_g] = True\n \n # Compute new query\n mean_positive_gf = positive_indices.float().mm(gf) / positive_indices.float().sum(dim=1, keepdim=True)\n mean_negative_gf = negative_indices.float().mm(gf) / negative_indices.float().sum(dim=1, keepdim=True)\n mean_positive_gf[mean_positive_gf.isnan()] = 0\n mean_negative_gf[mean_negative_gf.isnan()] = 0\n qf_adjusted = qf * alpha + mean_positive_gf * beta - mean_negative_gf * gamma",
"_____no_output_____"
],
[
"distmat = compute_distmat(qf_adjusted, gf)\ndistmat[positive_indices] = 0\ndistmat[negative_indices] = float(\"inf\")\nnew_result = evaluate(distmat.numpy(), q_pids, g_pids, q_camids, g_camids)\nnew_result",
"_____no_output_____"
]
],
[
[
"### Function Tests (Rocchio)",
"_____no_output_____"
]
],
[
[
"def adjust_qf(qf, gf, positive_indices, negative_indices, alpha=1, beta=0.65, gamma=0.35):\n assert(qf.shape[1] == gf.shape[1])\n mean_positive_gf = positive_indices.float().mm(gf) / positive_indices.float().sum(dim=1, keepdim=True)\n mean_negative_gf = negative_indices.float().mm(gf) / negative_indices.float().sum(dim=1, keepdim=True)\n mean_positive_gf[mean_positive_gf.isnan()] = 0\n mean_negative_gf[mean_negative_gf.isnan()] = 0\n qf_adjusted = qf * alpha + mean_positive_gf * beta - mean_negative_gf * gamma\n return qf_adjusted",
"_____no_output_____"
],
[
"def update_feedback_indices(distmat, q_pids, g_pids, positive_indices, negative_indices, inplace=True):\n \"\"\"\n Note that distmat is corrupted if inplace=True.\n \n distmat: q x g Tensor (adjusted query to gallery)\n q_pids: q\n g_pids: g\n positive_indices: q x g\n negative_indices: q x g\n \n :Returns:\n positive_indices, negative_indices\n \"\"\"\n q, g = tuple(distmat.shape)\n \n if not inplace:\n distmat = distmat.clone().detach()\n positive_indices = positive_indices.copy()\n negative_indices = negative_indices.copy()\n \n distmat[positive_indices] = float(\"inf\")\n distmat[negative_indices] = float(\"inf\")\n \n indices = distmat.argmin(dim=1)\n pmap = g_pids[indices] == q_pids\n positive_q = torch.arange(0, q)[pmap]\n negative_q = torch.arange(0, q)[pmap == False]\n positive_g = indices[pmap]\n negative_g = indices[pmap== False]\n\n existing = positive_indices[positive_q, positive_g]\n assert(not existing.any())\n positive_indices[positive_q, positive_g] = True\n existing = negative_indices[negative_q, negative_g]\n assert(not existing.any())\n negative_indices[negative_q, negative_g] = True\n \n return positive_indices, negative_indices",
"_____no_output_____"
],
[
"def init_feedback_indices(q, g):\n return torch.zeros((q, g), dtype=bool)",
"_____no_output_____"
],
[
"def update_distmat(qf, gf, q_pids, g_pids, positive_indices=None, negative_indices=None,\n inplace=True, previous_distmat=None, alpha=1, beta=0.65, gamma=0.35):\n \"\"\"\n previous_distmat: adjusted distmat (!= compute_distmat(qf, gf))\n \"\"\"\n q, g = qf.shape[0], gf.shape[0]\n assert(qf.shape[1] == gf.shape[1])\n \n if positive_indices is None:\n positive_indices = init_feedback_indices(q, g)\n if negative_indices is None:\n negative_indices = init_feedback_indices(q, g)\n\n distmat = previous_distmat \n if distmat is None:\n qf_adjusted = adjust_qf(qf, gf, positive_indices, negative_indices)\n distmat = compute_distmat(qf_adjusted, gf)\n\n positive_indices, negative_indices = update_feedback_indices(\n distmat, q_pids, g_pids, positive_indices, negative_indices, inplace=inplace)\n \n qf_adjusted = adjust_qf(qf, gf, positive_indices, negative_indices, alpha=alpha, beta=beta, gamma=gamma)\n distmat = compute_distmat(qf_adjusted, gf)\n \n return distmat, positive_indices, negative_indices",
"_____no_output_____"
],
[
"positive_indices = None\nnegative_indices = None\ndistmat = None\nfor i in tqdm(range(5)):\n distmat, positive_indices, negative_indices = update_distmat(\n qf, gf, q_pids, g_pids, positive_indices, negative_indices, previous_distmat=distmat)\n ",
"_____no_output_____"
],
[
"distmat[positive_indices] = 0\ndistmat[negative_indices] = float(\"inf\")\nnew_result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)\nnew_result",
"_____no_output_____"
]
],
[
[
"### Module Test (Naive)",
"_____no_output_____"
]
],
[
[
"positive_indices = None\nnegative_indices = None\ndistmat = None\nfor i in tqdm(range(3)):\n distmat, positive_indices, negative_indices = feedback.naive_round(\n qf, gf, q_pids, g_pids, positive_indices, negative_indices, previous_distmat=distmat)\nnaive_result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)\nnaive_result",
"_____no_output_____"
]
],
[
[
"### Module Test (Rocchio)",
"_____no_output_____"
]
],
[
[
"positive_indices = None\nnegative_indices = None\ndistmat = None\nfor i in tqdm(range(3)):\n distmat, positive_indices, negative_indices = rocchio.rocchio_round(\n qf, gf, q_pids, g_pids, positive_indices, negative_indices, previous_distmat=distmat)\nrocchio_result = evaluate(distmat, q_pids, g_pids, q_camids, g_camids)\nrocchio_result",
"_____no_output_____"
]
],
[
[
"### Single Feedback Rocchio (Old)\nInitial implementation test",
"_____no_output_____"
]
],
[
[
"g_pids = torch.Tensor(output[\"g_pids\"])\nq_pids = torch.Tensor(output[\"q_pids\"])\nmatch = g_pids[min_indices] == q_pids",
"_____no_output_____"
],
[
"selected_gf = gf[min_indices]\nselected_gf.shape",
"_____no_output_____"
],
[
"weights = match.float() * (beta + gamma) - gamma",
"_____no_output_____"
],
[
"weighted_feedback = selected_gf * weights.reshape(-1, 1)",
"_____no_output_____"
],
[
"weighted_feedback",
"_____no_output_____"
],
[
"inverse_weights = 1 - weights",
"_____no_output_____"
],
[
"new_qf = qf * inverse_weights.reshape(-1, 1) + weighted_feedback",
"_____no_output_____"
],
[
"new_distmat = compute_distmat(new_qf, gf)",
"_____no_output_____"
],
[
"new_result = evaluate(new_distmat.numpy(), q_pids, g_pids, np.array(output[\"q_camids\"]), np.array(output[\"g_camids\"]), test_ratio=0.1)\nnew_result",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0db35c972ceda4a7bb35fc002176b53ba3043c7 | 8,237 | ipynb | Jupyter Notebook | docs/source/user_guide/clean/clean_de_wkn.ipynb | NoirTree/dataprep | 4744134886017ce7381fa7ae7c201772e9bafc12 | [
"MIT"
] | 1,229 | 2019-12-21T02:58:59.000Z | 2022-03-30T08:12:33.000Z | docs/source/user_guide/clean/clean_de_wkn.ipynb | NoirTree/dataprep | 4744134886017ce7381fa7ae7c201772e9bafc12 | [
"MIT"
] | 680 | 2019-12-19T06:09:23.000Z | 2022-03-31T04:15:25.000Z | docs/source/user_guide/clean/clean_de_wkn.ipynb | NoirTree/dataprep | 4744134886017ce7381fa7ae7c201772e9bafc12 | [
"MIT"
] | 170 | 2020-01-08T03:27:26.000Z | 2022-03-20T20:42:55.000Z | 24.369822 | 448 | 0.535632 | [
[
[
".. _wkn_userguide:\n\nWKN Strings\n============",
"_____no_output_____"
],
[
"Introduction\n------------\n\nThe function :func:`clean_de_wkn() <dataprep.clean.clean_de_wkn.clean_de_wkn>` cleans a column containing German Securities Identification Codes (WKN) strings, and standardizes them in a given format. The function :func:`validate_de_wkn() <dataprep.clean.clean_de_wkn.validate_de_wkn>` validates either a single WKN strings, a column of WKN strings or a DataFrame of WKN strings, returning `True` if the value is valid, and `False` otherwise.",
"_____no_output_____"
]
],
[
[
"WKN strings can be converted to the following formats via the `output_format` parameter:\n\n* `compact`: only number strings without any seperators or whitespace, like \"A0MNRK\"\n* `standard`: WKN strings with proper whitespace in the proper places. Note that in the case of WKN, the compact format is the same as the standard one.\n* `isin`: convert the number to an ISIN, like \"DE000A0MNRK9\".\n\nInvalid parsing is handled with the `errors` parameter:\n\n* `coerce` (default): invalid parsing will be set to NaN\n* `ignore`: invalid parsing will return the input\n* `raise`: invalid parsing will raise an exception\n\nThe following sections demonstrate the functionality of `clean_de_wkn()` and `validate_de_wkn()`. ",
"_____no_output_____"
],
[
"### An example dataset containing WKN strings",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\ndf = pd.DataFrame(\n {\n \"wkn\": [\n 'A0MNRK',\n 'AOMNRK',\n '7542011030',\n '7552A10004',\n '8019010008',\n \"hello\",\n np.nan,\n \"NULL\",\n ], \n \"address\": [\n \"123 Pine Ave.\",\n \"main st\",\n \"1234 west main heights 57033\",\n \"apt 1 789 s maple rd manhattan\",\n \"robie house, 789 north main street\",\n \"1111 S Figueroa St, Los Angeles, CA 90015\",\n \"(staples center) 1111 S Figueroa St, Los Angeles\",\n \"hello\",\n ]\n }\n)\ndf",
"_____no_output_____"
]
],
[
[
"## 1. Default `clean_de_wkn`\n\nBy default, `clean_de_wkn` will clean wkn strings and output them in the standard format with proper separators.",
"_____no_output_____"
]
],
[
[
"from dataprep.clean import clean_de_wkn\nclean_de_wkn(df, column = \"wkn\")",
"_____no_output_____"
]
],
[
[
"## 2. Output formats",
"_____no_output_____"
],
[
"This section demonstrates the output parameter.",
"_____no_output_____"
],
[
"### `standard` (default)",
"_____no_output_____"
]
],
[
[
"clean_de_wkn(df, column = \"wkn\", output_format=\"standard\")",
"_____no_output_____"
]
],
[
[
"### `compact`",
"_____no_output_____"
]
],
[
[
"clean_de_wkn(df, column = \"wkn\", output_format=\"compact\")",
"_____no_output_____"
]
],
[
[
"### `isin`",
"_____no_output_____"
]
],
[
[
"clean_de_wkn(df, column = \"wkn\", output_format=\"isin\")",
"_____no_output_____"
]
],
[
[
"## 3. `inplace` parameter\n\nThis deletes the given column from the returned DataFrame. \nA new column containing cleaned WKN strings is added with a title in the format `\"{original title}_clean\"`.",
"_____no_output_____"
]
],
[
[
"clean_de_wkn(df, column=\"wkn\", inplace=True)",
"_____no_output_____"
]
],
[
[
"## 4. `errors` parameter",
"_____no_output_____"
],
[
"### `coerce` (default)",
"_____no_output_____"
]
],
[
[
"clean_de_wkn(df, \"wkn\", errors=\"coerce\")",
"_____no_output_____"
]
],
[
[
"### `ignore`",
"_____no_output_____"
],
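[
[
"# Close out the SQLAlchemy session.\n# (Editorial completion: the heading above announces this step; session.close()\n# is the standard call and is supplied as an assumption, since the original dump\n# contained no code cell here.)\nsession.close()",
"_____no_output_____"
]
]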
],
[
[
"clean_de_wkn(df, \"wkn\", errors=\"ignore\")",
"_____no_output_____"
]
],
[
[
"## 4. `validate_de_wkn()`",
"_____no_output_____"
],
[
"`validate_de_wkn()` returns `True` when the input is a valid WKN. Otherwise it returns `False`.\n\nThe input of `validate_de_wkn()` can be a string, a Pandas DataSeries, a Dask DataSeries, a Pandas DataFrame and a dask DataFrame.\n\nWhen the input is a string, a Pandas DataSeries or a Dask DataSeries, user doesn't need to specify a column name to be validated. \n\nWhen the input is a Pandas DataFrame or a dask DataFrame, user can both specify or not specify a column name to be validated. If user specify the column name, `validate_de_wkn()` only returns the validation result for the specified column. If user doesn't specify the column name, `validate_de_wkn()` returns the validation result for the whole DataFrame.",
"_____no_output_____"
]
],
[
[
"from dataprep.clean import validate_de_wkn\nprint(validate_de_wkn('A0MNRK'))\nprint(validate_de_wkn('AOMNRK'))\nprint(validate_de_wkn('7542011030'))\nprint(validate_de_wkn('7552A10004'))\nprint(validate_de_wkn('8019010008'))\nprint(validate_de_wkn(\"hello\"))\nprint(validate_de_wkn(np.nan))\nprint(validate_de_wkn(\"NULL\"))",
"_____no_output_____"
]
],
[
[
"### Series",
"_____no_output_____"
]
],
[
[
"validate_de_wkn(df[\"wkn\"])",
"_____no_output_____"
]
],
[
[
"### DataFrame + Specify Column",
"_____no_output_____"
]
],
[
[
"validate_de_wkn(df, column=\"wkn\")",
"_____no_output_____"
],
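[
"code"
]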
],
[
[
"### Only DataFrame",
"_____no_output_____"
]
],
[
[
"validate_de_wkn(df)",
"_____no_output_____"
]
]
] | [
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"raw",
"raw"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0db44f5448db4e11df83a48e53ab1e7a64da16e | 650,760 | ipynb | Jupyter Notebook | Pix2PixHD_Example.ipynb | NikitaStasenko/RGB2NIR_Experimental | 8c951fe64581112d2ae3207a21a4d108a64179b6 | [
"BSD-3-Clause"
] | 1 | 2021-11-25T10:55:39.000Z | 2021-11-25T10:55:39.000Z | Pix2PixHD_Example.ipynb | NikitaStasenko/RGB2NIR_Experimental | 8c951fe64581112d2ae3207a21a4d108a64179b6 | [
"BSD-3-Clause"
] | null | null | null | Pix2PixHD_Example.ipynb | NikitaStasenko/RGB2NIR_Experimental | 8c951fe64581112d2ae3207a21a4d108a64179b6 | [
"BSD-3-Clause"
] | null | null | null | 237.937843 | 138,032 | 0.854756 | [
[
[
"from google.colab import drive\ndrive.mount('/content/gdrive')",
"Mounted at /content/gdrive\n"
],
[
"!git clone https://github.com/NVIDIA/pix2pixHD.git",
"Cloning into 'pix2pixHD'...\nremote: Enumerating objects: 340, done.\u001b[K\nremote: Total 340 (delta 0), reused 0 (delta 0), pack-reused 340\u001b[K\nReceiving objects: 100% (340/340), 55.68 MiB | 33.66 MiB/s, done.\nResolving deltas: 100% (156/156), done.\n"
],
[
"import os\nos.chdir('pix2pixHD/')",
"_____no_output_____"
],
[
"# !chmod 755 /content/gdrive/My\\ Drive/Images_for_GAN/datasets/download_convert_apples_dataset.sh\n# !/content/gdrive/My\\ Drive/Images_for_GAN/datasets/download_convert_apples_dataset.sh",
"_____no_output_____"
],
[
"!ls",
"_config.yml\t imgs\t precompute_feature_maps.py test.py\ndata\t\t LICENSE.txt README.md\t\t train.py\ndatasets\t models\t run_engine.py\t\t util\nencode_features.py options\t scripts\n"
],
[
"!pip install dominate",
"Collecting dominate\n Downloading dominate-2.6.0-py2.py3-none-any.whl (29 kB)\nInstalling collected packages: dominate\nSuccessfully installed dominate-2.6.0\n"
],
[
"import numpy as np\nimport scipy\nimport matplotlib\nimport pandas as pd\nimport cv2\nimport matplotlib.pyplot as plt\n# import pydmd \n#from pydmd import DMD\n\n%matplotlib inline\nimport scipy.integrate\nfrom matplotlib import animation\nfrom IPython.display import HTML\n\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 8, 5\n\nfrom PIL import Image\nfrom skimage import io",
"_____no_output_____"
],
[
"# Example of RGB image from A\napples_example1 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/A/20_12_26_22_15_00_Canon_top_all_on.jpg')\napples_example1 = cv2.cvtColor(apples_example1, cv2.COLOR_BGR2RGB)\nplt.imshow(apples_example1)\nplt.show()\n\nprint(type(apples_example1))\nprint(\"- Number of Pixels: \" + str(apples_example1.size))\nprint(\"- Shape/Dimensions: \" + str(apples_example1.shape))",
"_____no_output_____"
],
[
"# Example of RGB image from B\napples_example2 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/B/set10_20201226_221732_686_00000_channel7.png')\napples_example2 = cv2.cvtColor(apples_example2, cv2.COLOR_BGR2RGB)\nplt.imshow(apples_example2)\nplt.show()\n\nprint(type(apples_example2))\nprint(\"- Number of Pixels: \" + str(apples_example2.size))\nprint(\"- Shape/Dimensions: \" + str(apples_example2.shape))",
"_____no_output_____"
],
[
"# Example of RGB image from ./train_A/\napples_example3 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_A/20210111_171500.png')\napples_example3 = cv2.cvtColor(apples_example3, cv2.COLOR_BGR2RGB)\nplt.imshow(apples_example3)\nplt.show()\n\nprint(type(apples_example3))\nprint(\"- Number of Pixels: \" + str(apples_example3.size))\nprint(\"- Shape/Dimensions: \" + str(apples_example3.shape))",
"_____no_output_____"
],
[
"# Example of RGB image from ./train_B/\napples_example4 = cv2.imread('/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_B/20210111_134500.png')\napples_example4 = cv2.cvtColor(apples_example4, cv2.COLOR_BGR2RGB)\nplt.imshow(apples_example4)\nplt.show()\n\nprint(type(apples_example4))\nprint(\"- Number of Pixels: \" + str(apples_example4.size))\nprint(\"- Shape/Dimensions: \" + str(apples_example4.shape))",
"_____no_output_____"
],
[
"#!python train.py --loadSize 512 --fineSize 512 --label_nc 0 --no_instance --name apples_RGB_NIR --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train --checkpoints_dir /content/gdrive/MyDrive/Images_for_GAN/checkpoints --model Pix2PixHD --save_epoch_freq 5 ",
"------------ Options -------------\nbatchSize: 1\nbeta1: 0.5\ncheckpoints_dir: /content/gdrive/MyDrive/Images_for_GAN/checkpoints\ncontinue_train: False\ndata_type: 32\ndataroot: /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train\ndebug: False\ndisplay_freq: 100\ndisplay_winsize: 512\nfeat_num: 3\nfineSize: 512\nfp16: False\ngpu_ids: [0]\ninput_nc: 3\ninstance_feat: False\nisTrain: True\nlabel_feat: False\nlabel_nc: 0\nlambda_feat: 10.0\nloadSize: 512\nload_features: False\nload_pretrain: \nlocal_rank: 0\nlr: 0.0002\nmax_dataset_size: inf\nmodel: Pix2PixHD\nnThreads: 2\nn_blocks_global: 9\nn_blocks_local: 3\nn_clusters: 10\nn_downsample_E: 4\nn_downsample_global: 4\nn_layers_D: 3\nn_local_enhancers: 1\nname: apples_RGB_NIR\nndf: 64\nnef: 16\nnetG: global\nngf: 64\nniter: 100\nniter_decay: 100\nniter_fix_global: 0\nno_flip: False\nno_ganFeat_loss: False\nno_html: False\nno_instance: True\nno_lsgan: False\nno_vgg_loss: False\nnorm: instance\nnum_D: 2\noutput_nc: 3\nphase: train\npool_size: 0\nprint_freq: 100\nresize_or_crop: scale_width\nsave_epoch_freq: 5\nsave_latest_freq: 1000\nserial_batches: False\ntf_log: False\nuse_dropout: False\nverbose: False\nwhich_epoch: latest\n-------------- End ----------------\ntrain.py:9: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n def lcm(a,b): return abs(a * b)/fractions.gcd(a,b) if a and b else 0\nCustomDatasetDataLoader\ndataset [AlignedDataset] was created\n#training images = 57\nTraceback (most recent call last):\n File \"train.py\", line 41, in <module>\n model = create_model(opt)\n File \"/content/pix2pixHD/pix2pixHD/pix2pixHD/models/models.py\", line 13, in create_model\n model.initialize(opt)\n File \"/content/pix2pixHD/pix2pixHD/pix2pixHD/models/ui_model.py\", line 16, in initialize\n assert(not opt.isTrain)\nAssertionError\n"
],
[
"path_train_A = '/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_A/'\nprint('path_train_A: ', path_train_A)\nprint('Number of images in path_train_A:', len(path_train_A))\n\npath_train_B = '/content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_B/'\nprint('path_train_B: ', path_train_B)\nprint('Number of images in path_train_B:', len(path_train_B))",
"path_train_A: /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_A/\nNumber of images in path_train_A: 68\npath_train_B: /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train/train_B/\nNumber of images in path_train_B: 68\n"
],
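[
"# Sanity check (a sketch, assuming the aligned dataset pairs images by filename):\n# every file in train_A should have a counterpart in train_B, so the symmetric\n# difference below should print 'none'. Reuses path_train_A/path_train_B and os\n# from the cell above.\nfiles_A = set(os.listdir(path_train_A))\nfiles_B = set(os.listdir(path_train_B))\nunpaired = sorted(files_A ^ files_B)  # filenames present in only one folder\nprint('Unpaired filenames:', unpaired if unpaired else 'none')",
"_____no_output_____"
],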
[
"# !python train.py --name apples_trash --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/Trash --checkpoints_dir /content/gdrive/MyDrive/Images_for_GAN/checkpoints --norm batch --loadSize 512 --fineSize 512 --label_nc 0 --no_instance\n!python train.py --name apples_trash_1 --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train --label_nc 0 --no_instance --loadSize 320 --fineSize 160 --resize_or_crop resize_and_crop",
"------------ Options -------------\nbatchSize: 1\nbeta1: 0.5\ncheckpoints_dir: ./checkpoints\ncontinue_train: False\ndata_type: 32\ndataroot: /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/train\ndebug: False\ndisplay_freq: 100\ndisplay_winsize: 512\nfeat_num: 3\nfineSize: 160\nfp16: False\ngpu_ids: [0]\ninput_nc: 3\ninstance_feat: False\nisTrain: True\nlabel_feat: False\nlabel_nc: 0\nlambda_feat: 10.0\nloadSize: 320\nload_features: False\nload_pretrain: \nlocal_rank: 0\nlr: 0.0002\nmax_dataset_size: inf\nmodel: pix2pixHD\nnThreads: 2\nn_blocks_global: 9\nn_blocks_local: 3\nn_clusters: 10\nn_downsample_E: 4\nn_downsample_global: 4\nn_layers_D: 3\nn_local_enhancers: 1\nname: apples_trash_1\nndf: 64\nnef: 16\nnetG: global\nngf: 64\nniter: 100\nniter_decay: 100\nniter_fix_global: 0\nno_flip: False\nno_ganFeat_loss: False\nno_html: False\nno_instance: True\nno_lsgan: False\nno_vgg_loss: False\nnorm: instance\nnum_D: 2\noutput_nc: 3\nphase: train\npool_size: 0\nprint_freq: 100\nresize_or_crop: resize_and_crop\nsave_epoch_freq: 10\nsave_latest_freq: 1000\nserial_batches: False\ntf_log: False\nuse_dropout: False\nverbose: False\nwhich_epoch: latest\n-------------- End ----------------\ntrain.py:9: DeprecationWarning: fractions.gcd() is deprecated. Use math.gcd() instead.\n def lcm(a,b): return abs(a * b)/fractions.gcd(a,b) if a and b else 0\nCustomDatasetDataLoader\ndataset [AlignedDataset] was created\n#training images = 57\nGlobalGenerator(\n (model): Sequential(\n (0): ReflectionPad2d((3, 3, 3, 3))\n (1): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1))\n (2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (6): ReLU(inplace=True)\n (7): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (8): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (9): ReLU(inplace=True)\n (10): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (11): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (12): ReLU(inplace=True)\n (13): Conv2d(512, 1024, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))\n (14): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (15): ReLU(inplace=True)\n (16): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (17): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (18): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), 
stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (19): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (20): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (21): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (22): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (23): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (24): ResnetBlock(\n (conv_block): Sequential(\n (0): ReflectionPad2d((1, 1, 1, 1))\n (1): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (2): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (3): ReLU(inplace=True)\n (4): ReflectionPad2d((1, 1, 1, 1))\n (5): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1))\n (6): InstanceNorm2d(1024, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n )\n )\n (25): ConvTranspose2d(1024, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))\n (26): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (27): ReLU(inplace=True)\n (28): ConvTranspose2d(512, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))\n (29): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, 
track_running_stats=False)\n (30): ReLU(inplace=True)\n (31): ConvTranspose2d(256, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))\n (32): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (33): ReLU(inplace=True)\n (34): ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1))\n (35): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (36): ReLU(inplace=True)\n (37): ReflectionPad2d((3, 3, 3, 3))\n (38): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1))\n (39): Tanh()\n )\n)\nMultiscaleDiscriminator(\n (scale0_layer0): Sequential(\n (0): Conv2d(6, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))\n (1): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale0_layer1): Sequential(\n (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))\n (1): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (2): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale0_layer2): Sequential(\n (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))\n (1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (2): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale0_layer3): Sequential(\n (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))\n (1): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (2): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale0_layer4): Sequential(\n (0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))\n )\n (scale1_layer0): Sequential(\n (0): Conv2d(6, 64, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))\n (1): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale1_layer1): Sequential(\n (0): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))\n (1): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (2): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale1_layer2): Sequential(\n (0): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(2, 2))\n (1): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (2): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale1_layer3): Sequential(\n (0): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))\n (1): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)\n (2): LeakyReLU(negative_slope=0.2, inplace=True)\n )\n (scale1_layer4): Sequential(\n (0): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(2, 2))\n )\n (downsample): AvgPool2d(kernel_size=3, stride=2, padding=[1, 1])\n)\nDownloading: \"https://download.pytorch.org/models/vgg19-dcbb9e9d.pth\" to /root/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth\n100% 548M/548M [00:08<00:00, 71.2MB/s]\ncreate web directory ./checkpoints/apples_trash_1/web...\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" 
+\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n[the two torchvision UserWarnings above recur before every epoch and are elided below]\nEnd of epoch 1 / 200 \t Time Taken: 24 sec\n
(epoch: 2, iters: 43, time: 0.389) G_GAN: 1.181 G_GAN_Feat: 7.346 G_VGG: 5.404 D_real: 0.424 D_fake: 0.345 \nEnd of epoch 2 / 200 \t Time Taken: 20 sec\nEnd of epoch 3 / 200 \t Time Taken: 20 sec\n
(epoch: 4, iters: 29, time: 0.357) G_GAN: 0.481 G_GAN_Feat: 0.928 G_VGG: 2.099 D_real: 0.440 D_fake: 0.675 \nEnd of epoch 4 / 200 \t Time Taken: 20 sec\nEnd of epoch 5 / 200 \t Time Taken: 20 sec\n
(epoch: 6, iters: 15, time: 0.358) G_GAN: 0.852 G_GAN_Feat: 7.522 G_VGG: 4.533 D_real: 0.138 D_fake: 0.349 \nEnd of epoch 6 / 200 \t Time Taken: 20 sec\nEnd of epoch 7 / 200 \t Time Taken: 20 sec\n
(epoch: 8, iters: 1, time: 0.359) G_GAN: 1.580 G_GAN_Feat: 6.136 G_VGG: 5.423 D_real: 0.379 D_fake: 0.094 \nEnd of epoch 8 / 200 \t Time Taken: 20 sec\n
(epoch: 9, iters: 44, time: 0.359) G_GAN: 0.385 G_GAN_Feat: 7.762 G_VGG: 4.619 D_real: 0.121 D_fake: 0.688 \nEnd of epoch 9 / 200 \t Time Taken: 20 sec\nEnd of epoch 10 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 10, iters 570\n
(epoch: 11, iters: 30, time: 0.424) G_GAN: 0.686 G_GAN_Feat: 7.081 G_VGG: 7.020 D_real: 0.110 D_fake: 0.478 \nEnd of epoch 11 / 200 \t Time Taken: 20 sec\nEnd of epoch 12 / 200 \t Time Taken: 20 sec\n
(epoch: 13, iters: 16, time: 0.359) G_GAN: 1.162 G_GAN_Feat: 2.911 G_VGG: 2.893 D_real: 1.506 D_fake: 0.149 \nEnd of epoch 13 / 200 \t Time Taken: 20 sec\nEnd of epoch 14 / 200 \t Time Taken: 20 sec\n
(epoch: 15, iters: 2, time: 0.359) G_GAN: 0.464 G_GAN_Feat: 1.079 G_VGG: 3.190 D_real: 0.376 D_fake: 0.584 \nEnd of epoch 15 / 200 \t Time Taken: 20 sec\n
(epoch: 16, iters: 45, time: 0.359) G_GAN: 0.471 G_GAN_Feat: 1.393 G_VGG: 3.524 D_real: 0.391 D_fake: 0.558 \nEnd of epoch 16 / 200 \t Time Taken: 20 sec\nEnd of epoch 17 / 200 \t Time Taken: 20 sec\n
(epoch: 18, iters: 31, time: 0.359) G_GAN: 1.705 G_GAN_Feat: 1.327 G_VGG: 2.272 D_real: 1.615 D_fake: 0.039 \nsaving the latest model (epoch 18, total_steps 1000)\nEnd of epoch 18 / 200 \t Time Taken: 25 sec\nEnd of epoch 19 / 200 \t Time Taken: 20 sec\n
(epoch: 20, iters: 17, time: 0.361) G_GAN: 0.776 G_GAN_Feat: 5.052 G_VGG: 6.557 D_real: 0.127 D_fake: 0.382 \nEnd of epoch 20 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 20, iters 1140\nEnd of epoch 21 / 200 \t Time Taken: 20 sec\n
(epoch: 22, iters: 3, time: 0.429) G_GAN: 1.469 G_GAN_Feat: 0.749 G_VGG: 2.137 D_real: 1.398 D_fake: 0.069 \nEnd of epoch 22 / 200 \t Time Taken: 20 sec\n
(epoch: 23, iters: 46, time: 0.358) G_GAN: 0.847 G_GAN_Feat: 8.087 G_VGG: 5.667 D_real: 0.068 D_fake: 0.272 \nEnd of epoch 23 / 200 \t Time Taken: 20 sec\nEnd of epoch 24 / 200 \t Time Taken: 20 sec\n
(epoch: 25, iters: 32, time: 0.359) G_GAN: 1.294 G_GAN_Feat: 1.084 G_VGG: 1.590 D_real: 1.196 D_fake: 0.115 \nEnd of epoch 25 / 200 \t Time Taken: 20 sec\nEnd of epoch 26 / 200 \t Time Taken: 20 sec\n
(epoch: 27, iters: 18, time: 0.359) G_GAN: 0.554 G_GAN_Feat: 8.562 G_VGG: 6.677 D_real: 0.042 D_fake: 0.483 \nEnd of epoch 27 / 200 \t Time Taken: 20 sec\nEnd of epoch 28 / 200 \t Time Taken: 20 sec\n
(epoch: 29, iters: 4, time: 0.360) G_GAN: 0.994 G_GAN_Feat: 1.286 G_VGG: 1.978 D_real: 0.956 D_fake: 0.253 \nEnd of epoch 29 / 200 \t Time Taken: 20 sec\n
(epoch: 30, iters: 47, time: 0.359) G_GAN: 0.460 G_GAN_Feat: 1.087 G_VGG: 1.067 D_real: 0.456 D_fake: 0.579 \nEnd of epoch 30 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 30, iters 1710\nEnd of epoch 31 / 200 \t Time Taken: 20 sec\n
(epoch: 32, iters: 33, time: 0.425) G_GAN: 0.443 G_GAN_Feat: 1.089 G_VGG: 2.175 D_real: 0.391 D_fake: 0.614 \nEnd of epoch 32 / 200 \t Time Taken: 20 sec\nEnd of epoch 33 / 200 \t Time Taken: 20 sec\n
(epoch: 34, iters: 19, time: 0.359) G_GAN: 1.386 G_GAN_Feat: 7.652 G_VGG: 8.029 D_real: 0.118 D_fake: 0.103 \nEnd of epoch 34 / 200 \t Time Taken: 20 sec\nEnd of epoch 35 / 200 \t Time Taken: 20 sec\n
(epoch: 36, iters: 5, time: 0.359) G_GAN: 1.201 G_GAN_Feat: 1.030 G_VGG: 1.772 D_real: 1.193 D_fake: 0.127 \nsaving the latest model (epoch 36, total_steps 2000)\nEnd of epoch 36 / 200 \t Time Taken: 25 sec\n
(epoch: 37, iters: 48, time: 0.361) G_GAN: 0.452 G_GAN_Feat: 0.733 G_VGG: 0.501 D_real: 0.489 D_fake: 0.570 \nEnd of epoch 37 / 200 \t Time Taken: 20 sec\nEnd of epoch 38 / 200 \t Time Taken: 20 sec\n
(epoch: 39, iters: 34, time: 0.359) G_GAN: 0.635 G_GAN_Feat: 7.394 G_VGG: 5.021 D_real: 0.068 D_fake: 0.441 \nEnd of epoch 39 / 200 \t Time Taken: 20 sec\nEnd of epoch 40 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 40, iters 2280\n
(epoch: 41, iters: 20, time: 0.427) G_GAN: 1.188 G_GAN_Feat: 5.528 G_VGG: 4.033 D_real: 0.159 D_fake: 0.180 \nEnd of epoch 41 / 200 \t Time Taken: 20 sec\nEnd of epoch 42 / 200 \t Time Taken: 20 sec\n
(epoch: 43, iters: 6, time: 0.359) G_GAN: 0.883 G_GAN_Feat: 0.781 G_VGG: 2.192 D_real: 0.861 D_fake: 0.294 \nEnd of epoch 43 / 200 \t Time Taken: 20 sec\n
(epoch: 44, iters: 49, time: 0.359) G_GAN: 1.131 G_GAN_Feat: 2.337 G_VGG: 2.160 D_real: 1.202 D_fake: 0.165 \nEnd of epoch 44 / 200 \t Time Taken: 20 sec\n
/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\nEnd of epoch 45 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 46, iters: 35, time: 0.360) G_GAN: 0.477 G_GAN_Feat: 1.364 G_VGG: 1.628 D_real: 0.452 D_fake: 0.568 \nEnd of epoch 46 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 47 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 48, iters: 21, time: 0.359) G_GAN: 1.083 G_GAN_Feat: 4.402 G_VGG: 7.121 D_real: 0.501 D_fake: 0.165 \nEnd of epoch 48 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 49 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 50, iters: 7, time: 0.360) G_GAN: 0.719 G_GAN_Feat: 1.453 G_VGG: 2.059 D_real: 0.610 D_fake: 0.348 \nEnd of epoch 50 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 50, iters 2850\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 51, iters: 50, time: 0.426) G_GAN: 0.458 G_GAN_Feat: 2.575 G_VGG: 1.952 D_real: 0.524 D_fake: 0.595 \nEnd of epoch 51 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 52 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 53, iters: 36, time: 0.359) G_GAN: 0.934 G_GAN_Feat: 5.285 G_VGG: 8.354 D_real: 0.259 D_fake: 0.217 \nsaving the latest model (epoch 53, total_steps 3000)\nEnd of epoch 53 / 200 \t Time Taken: 25 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 54 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 55, iters: 22, time: 0.361) G_GAN: 0.777 G_GAN_Feat: 0.795 G_VGG: 0.801 D_real: 0.763 D_fake: 0.323 \nEnd of epoch 55 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 56 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 57, iters: 8, time: 0.360) G_GAN: 0.872 G_GAN_Feat: 6.383 G_VGG: 6.119 D_real: 0.083 D_fake: 0.267 \nEnd of epoch 57 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 58, iters: 51, time: 0.359) G_GAN: 0.572 G_GAN_Feat: 3.302 G_VGG: 6.148 D_real: 0.280 D_fake: 0.520 \nEnd of epoch 58 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 59 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 60, iters: 37, time: 0.360) G_GAN: 0.833 G_GAN_Feat: 6.068 G_VGG: 6.941 D_real: 0.153 D_fake: 0.315 \nEnd of epoch 60 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 60, iters 3420\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 61 / 200 \t Time Taken: 21 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 62, iters: 23, time: 0.436) G_GAN: 1.090 G_GAN_Feat: 5.824 G_VGG: 4.137 D_real: 0.086 D_fake: 0.229 \nEnd of epoch 62 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 63 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 64, iters: 9, time: 0.359) G_GAN: 0.654 G_GAN_Feat: 6.488 G_VGG: 7.121 D_real: 0.107 D_fake: 0.416 \nEnd of epoch 64 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 65, iters: 52, time: 0.359) G_GAN: 0.653 G_GAN_Feat: 5.032 G_VGG: 7.503 D_real: 0.246 D_fake: 0.385 \nEnd of epoch 65 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\nEnd of epoch 66 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 67, iters: 38, time: 0.360) G_GAN: 0.781 G_GAN_Feat: 5.965 G_VGG: 3.648 D_real: 0.143 D_fake: 0.342 \nEnd of epoch 67 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 68 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 69, iters: 24, time: 0.360) G_GAN: 0.522 G_GAN_Feat: 1.180 G_VGG: 2.604 D_real: 0.501 D_fake: 0.520 \nEnd of epoch 69 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 70 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 70, iters 3990\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 71, iters: 10, time: 0.427) G_GAN: 0.645 G_GAN_Feat: 6.521 G_VGG: 8.082 D_real: 0.034 D_fake: 0.412 \nsaving the latest model (epoch 71, total_steps 4000)\nEnd of epoch 71 / 200 \t Time Taken: 34 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 72, iters: 53, time: 0.358) G_GAN: 0.499 G_GAN_Feat: 1.242 G_VGG: 1.345 D_real: 0.414 D_fake: 0.528 \nEnd of epoch 72 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 73 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 74, iters: 39, time: 0.359) G_GAN: 0.480 G_GAN_Feat: 0.803 G_VGG: 0.693 D_real: 0.483 D_fake: 0.544 \nEnd of epoch 74 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 75 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 76, iters: 25, time: 0.359) G_GAN: 0.618 G_GAN_Feat: 5.500 G_VGG: 4.966 D_real: 0.043 D_fake: 0.480 \nEnd of epoch 76 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 77 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 78, iters: 11, time: 0.360) G_GAN: 0.939 G_GAN_Feat: 5.045 G_VGG: 3.700 D_real: 0.170 D_fake: 0.261 \nEnd of epoch 78 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 79, iters: 54, time: 0.360) G_GAN: 0.903 G_GAN_Feat: 4.287 G_VGG: 5.113 D_real: 0.407 D_fake: 0.302 \nEnd of epoch 79 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 80 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 80, iters 4560\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 81, iters: 40, time: 0.426) G_GAN: 0.676 G_GAN_Feat: 1.528 G_VGG: 3.798 D_real: 0.538 D_fake: 0.422 \nEnd of epoch 81 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 82 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 83, iters: 26, time: 0.359) G_GAN: 1.098 G_GAN_Feat: 3.803 G_VGG: 4.459 D_real: 0.506 D_fake: 0.203 \nEnd of epoch 83 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 84 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 85, iters: 12, time: 0.360) G_GAN: 1.398 G_GAN_Feat: 4.478 G_VGG: 3.582 D_real: 0.713 D_fake: 0.098 \nEnd of epoch 85 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 86, iters: 55, time: 0.359) G_GAN: 0.711 G_GAN_Feat: 4.982 G_VGG: 5.005 D_real: 0.741 D_fake: 0.381 \nEnd of epoch 86 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\nEnd of epoch 87 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 88, iters: 41, time: 0.359) G_GAN: 0.402 G_GAN_Feat: 5.977 G_VGG: 6.786 D_real: 0.093 D_fake: 0.733 \nsaving the latest model (epoch 88, total_steps 5000)\nEnd of epoch 88 / 200 \t Time Taken: 25 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 89 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 90, iters: 27, time: 0.361) G_GAN: 0.540 G_GAN_Feat: 4.163 G_VGG: 4.441 D_real: 0.148 D_fake: 0.583 \nEnd of epoch 90 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 90, iters 5130\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 91 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 92, iters: 13, time: 0.426) G_GAN: 0.652 G_GAN_Feat: 1.562 G_VGG: 1.515 D_real: 0.494 D_fake: 0.436 \nEnd of epoch 92 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 93, iters: 56, time: 0.359) G_GAN: 0.848 G_GAN_Feat: 7.001 G_VGG: 8.180 D_real: 0.057 D_fake: 0.314 \nEnd of epoch 93 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 94 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 95, iters: 42, time: 0.360) G_GAN: 1.052 G_GAN_Feat: 2.830 G_VGG: 2.183 D_real: 0.564 D_fake: 0.210 \nEnd of epoch 95 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 96 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 97, iters: 28, time: 0.360) G_GAN: 1.209 G_GAN_Feat: 1.728 G_VGG: 2.975 D_real: 1.052 D_fake: 0.160 \nEnd of epoch 97 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 98 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 99, iters: 14, time: 0.360) G_GAN: 0.583 G_GAN_Feat: 1.600 G_VGG: 2.234 D_real: 0.401 D_fake: 0.505 \nEnd of epoch 99 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 100, iters: 57, time: 0.360) G_GAN: 0.372 G_GAN_Feat: 6.345 G_VGG: 4.282 D_real: 0.092 D_fake: 0.755 \nEnd of epoch 100 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 100, iters 5700\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 101 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 102, iters: 43, time: 0.360) G_GAN: 0.504 G_GAN_Feat: 1.940 G_VGG: 3.448 D_real: 0.431 D_fake: 0.549 \nEnd of epoch 102 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 103 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 104, iters: 29, time: 0.359) G_GAN: 0.839 G_GAN_Feat: 2.810 G_VGG: 4.479 D_real: 0.776 D_fake: 0.294 \nEnd of epoch 104 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 105 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 106, iters: 15, time: 0.359) G_GAN: 0.668 G_GAN_Feat: 2.181 G_VGG: 1.658 D_real: 0.543 D_fake: 0.398 \nsaving the latest model (epoch 106, total_steps 6000)\nEnd of epoch 106 / 200 \t Time Taken: 25 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 107 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 108, iters: 1, time: 0.360) G_GAN: 1.096 G_GAN_Feat: 3.796 G_VGG: 3.730 D_real: 0.411 D_fake: 0.218 \nEnd of epoch 108 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 109, iters: 44, time: 0.359) G_GAN: 0.543 G_GAN_Feat: 5.745 G_VGG: 5.129 D_real: 0.141 D_fake: 0.567 \nEnd of epoch 109 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 110 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 110, iters 6270\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 111, iters: 30, time: 0.428) G_GAN: 0.865 G_GAN_Feat: 2.039 G_VGG: 3.284 D_real: 0.661 D_fake: 0.291 \nEnd of epoch 111 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 112 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 113, iters: 16, time: 0.359) G_GAN: 0.632 G_GAN_Feat: 1.590 G_VGG: 1.543 D_real: 0.440 D_fake: 0.448 \nEnd of epoch 113 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 114 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 115, iters: 2, time: 0.359) G_GAN: 1.136 G_GAN_Feat: 2.021 G_VGG: 1.310 D_real: 0.826 D_fake: 0.198 \nEnd of epoch 115 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 116, iters: 45, time: 0.359) G_GAN: 0.660 G_GAN_Feat: 2.442 G_VGG: 3.399 D_real: 0.386 D_fake: 0.399 \nEnd of epoch 116 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 117 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 118, iters: 31, time: 0.360) G_GAN: 0.663 G_GAN_Feat: 2.730 G_VGG: 2.047 D_real: 0.479 D_fake: 0.458 \nEnd of epoch 118 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 119 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 120, iters: 17, time: 0.360) G_GAN: 0.745 G_GAN_Feat: 1.585 G_VGG: 1.386 D_real: 0.854 D_fake: 0.385 \nEnd of epoch 120 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 120, iters 6840\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 121 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 122, iters: 3, time: 0.427) G_GAN: 0.850 G_GAN_Feat: 2.655 G_VGG: 1.526 D_real: 0.544 D_fake: 0.313 \nEnd of epoch 122 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 123, iters: 46, time: 0.359) G_GAN: 0.477 G_GAN_Feat: 4.084 G_VGG: 3.842 D_real: 0.164 D_fake: 0.699 \nsaving the latest model (epoch 123, total_steps 7000)\nEnd of epoch 123 / 200 \t Time Taken: 25 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 124 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 125, iters: 32, time: 0.361) G_GAN: 0.749 G_GAN_Feat: 3.107 G_VGG: 3.168 D_real: 0.432 D_fake: 0.399 \nEnd of epoch 125 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 126 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 127, iters: 18, time: 0.360) G_GAN: 0.584 G_GAN_Feat: 2.765 G_VGG: 2.392 D_real: 0.344 D_fake: 0.491 \nEnd of epoch 127 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 128 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 129, iters: 4, time: 0.360) G_GAN: 1.154 G_GAN_Feat: 3.599 G_VGG: 3.864 D_real: 0.429 D_fake: 0.269 \nEnd of epoch 129 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 130, iters: 47, time: 0.359) G_GAN: 0.812 G_GAN_Feat: 4.623 G_VGG: 3.781 D_real: 0.460 D_fake: 0.361 \nEnd of epoch 130 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 130, iters 7410\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\nEnd of epoch 131 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 132, iters: 33, time: 0.427) G_GAN: 0.651 G_GAN_Feat: 3.952 G_VGG: 3.384 D_real: 0.512 D_fake: 0.498 \nEnd of epoch 132 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 133 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 134, iters: 19, time: 0.359) G_GAN: 0.710 G_GAN_Feat: 2.976 G_VGG: 1.705 D_real: 0.636 D_fake: 0.382 \nEnd of epoch 134 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 135 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 136, iters: 5, time: 0.360) G_GAN: 0.532 G_GAN_Feat: 4.508 G_VGG: 3.195 D_real: 0.238 D_fake: 0.533 \nEnd of epoch 136 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 137, iters: 48, time: 0.359) G_GAN: 0.985 G_GAN_Feat: 4.029 G_VGG: 3.510 D_real: 0.470 D_fake: 0.241 \nEnd of epoch 137 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 138 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 139, iters: 34, time: 0.360) G_GAN: 1.195 G_GAN_Feat: 2.387 G_VGG: 2.336 D_real: 0.730 D_fake: 0.130 \nEnd of epoch 139 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 140 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 140, iters 7980\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 141, iters: 20, time: 0.428) G_GAN: 0.849 G_GAN_Feat: 4.874 G_VGG: 2.936 D_real: 0.838 D_fake: 0.324 \nsaving the latest model (epoch 141, total_steps 8000)\nEnd of epoch 141 / 200 \t Time Taken: 31 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 142 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 143, iters: 6, time: 0.358) G_GAN: 1.034 G_GAN_Feat: 3.967 G_VGG: 3.188 D_real: 0.573 D_fake: 0.266 \nEnd of epoch 143 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 144, iters: 49, time: 0.359) G_GAN: 1.040 G_GAN_Feat: 4.202 G_VGG: 2.897 D_real: 0.538 D_fake: 0.275 \nEnd of epoch 144 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 145 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 146, iters: 35, time: 0.360) G_GAN: 0.959 G_GAN_Feat: 4.766 G_VGG: 3.348 D_real: 0.355 D_fake: 0.243 \nEnd of epoch 146 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 147 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 148, iters: 21, time: 0.360) G_GAN: 0.461 G_GAN_Feat: 2.824 G_VGG: 1.388 D_real: 0.279 D_fake: 0.606 \nEnd of epoch 148 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 149 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 150, iters: 7, time: 0.360) G_GAN: 1.406 G_GAN_Feat: 3.731 G_VGG: 3.191 D_real: 0.968 D_fake: 0.098 \nEnd of epoch 150 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 150, iters 8550\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 151, iters: 50, time: 0.424) G_GAN: 0.344 G_GAN_Feat: 3.279 G_VGG: 1.374 D_real: 0.214 D_fake: 0.764 \nEnd of epoch 151 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\nEnd of epoch 152 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 153, iters: 36, time: 0.359) G_GAN: 0.800 G_GAN_Feat: 3.094 G_VGG: 1.307 D_real: 0.507 D_fake: 0.333 \nEnd of epoch 153 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 154 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 155, iters: 22, time: 0.359) G_GAN: 0.707 G_GAN_Feat: 2.290 G_VGG: 1.249 D_real: 0.479 D_fake: 0.369 \nEnd of epoch 155 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 156 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 157, iters: 8, time: 0.359) G_GAN: 0.593 G_GAN_Feat: 2.281 G_VGG: 1.210 D_real: 0.441 D_fake: 0.502 \nEnd of epoch 157 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 158, iters: 51, time: 0.359) G_GAN: 0.340 G_GAN_Feat: 1.976 G_VGG: 1.019 D_real: 0.239 D_fake: 0.749 \nsaving the latest model (epoch 158, total_steps 9000)\nEnd of epoch 158 / 200 \t Time Taken: 25 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 159 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 160, iters: 37, time: 0.362) G_GAN: 0.618 G_GAN_Feat: 2.206 G_VGG: 1.114 D_real: 0.450 D_fake: 0.464 \nEnd of epoch 160 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 160, iters 9120\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 161 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 162, iters: 23, time: 0.429) G_GAN: 0.964 G_GAN_Feat: 2.950 G_VGG: 1.062 D_real: 0.598 D_fake: 0.235 \nEnd of epoch 162 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 163 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 164, iters: 9, time: 0.359) G_GAN: 1.622 G_GAN_Feat: 6.265 G_VGG: 3.170 D_real: 0.279 D_fake: 0.071 \nEnd of epoch 164 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n(epoch: 165, iters: 52, time: 0.360) G_GAN: 0.858 G_GAN_Feat: 5.470 G_VGG: 4.017 D_real: 0.302 D_fake: 0.310 \nEnd of epoch 165 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 166 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 167, iters: 38, time: 0.360) G_GAN: 0.671 G_GAN_Feat: 4.080 G_VGG: 3.113 D_real: 0.343 D_fake: 0.412 \nEnd of epoch 167 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 168 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 169, iters: 24, time: 0.360) G_GAN: 0.648 G_GAN_Feat: 3.251 G_VGG: 2.636 D_real: 0.346 D_fake: 0.436 \nEnd of epoch 169 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 170 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 170, iters 9690\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 171, iters: 10, time: 0.433) G_GAN: 0.654 G_GAN_Feat: 1.399 G_VGG: 0.606 D_real: 0.522 D_fake: 0.419 \nEnd of epoch 171 / 200 \t Time Taken: 21 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 172, iters: 53, time: 0.358) G_GAN: 0.487 G_GAN_Feat: 3.549 G_VGG: 2.752 D_real: 0.267 D_fake: 0.605 \nEnd of epoch 172 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. 
\"\nEnd of epoch 173 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 174, iters: 39, time: 0.359) G_GAN: 0.511 G_GAN_Feat: 4.059 G_VGG: 2.686 D_real: 0.468 D_fake: 0.550 \nEnd of epoch 174 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\nEnd of epoch 175 / 200 \t Time Taken: 20 sec\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:317: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n/usr/local/lib/python3.7/dist-packages/torchvision/transforms/transforms.py:288: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. 
Please, use InterpolationMode enum.\n \"Argument interpolation should be of type InterpolationMode instead of int. \"\n(epoch: 176, iters: 25, time: 0.360) G_GAN: 0.754 G_GAN_Feat: 4.103 G_VGG: 2.821 D_real: 0.443 D_fake: 0.348 \nsaving the latest model (epoch 176, total_steps 10000)\nEnd of epoch 176 / 200 \t Time Taken: 25 sec\nEnd of epoch 177 / 200 \t Time Taken: 20 sec\n(epoch: 178, iters: 11, time: 0.361) G_GAN: 0.971 G_GAN_Feat: 5.300 G_VGG: 4.191 D_real: 0.138 D_fake: 0.315 \nEnd of epoch 178 / 200 \t Time Taken: 20 sec\n(epoch: 179, iters: 54, time: 0.359) G_GAN: 0.827 G_GAN_Feat: 3.391 G_VGG: 2.443 D_real: 0.408 D_fake: 0.362 \nEnd of epoch 179 / 200 \t Time Taken: 20 sec\nEnd of epoch 180 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 180, iters 10260\n(epoch: 181, iters: 40, time: 0.461) G_GAN: 0.365 G_GAN_Feat: 3.240 G_VGG: 1.082 D_real: 0.250 D_fake: 0.774 \nEnd of epoch 181 / 200 \t Time Taken: 20 sec\nEnd of epoch 182 / 200 \t Time Taken: 20 sec\n(epoch: 183, iters: 26, time: 0.359) G_GAN: 0.802 G_GAN_Feat: 3.095 G_VGG: 2.195 D_real: 0.552 D_fake: 0.328 \nEnd of epoch 183 / 200 \t Time Taken: 20 sec\nEnd of epoch 184 / 200 \t Time Taken: 20 sec\n(epoch: 185, iters: 12, time: 0.360) G_GAN: 0.750 G_GAN_Feat: 2.628 G_VGG: 1.005 D_real: 0.435 D_fake: 0.336 \nEnd of epoch 185 / 200 \t Time Taken: 20 sec\n(epoch: 186, iters: 55, time: 0.359) G_GAN: 0.328 G_GAN_Feat: 8.812 G_VGG: 3.717 D_real: 0.383 D_fake: 0.820 \nEnd of epoch 186 / 200 \t Time Taken: 20 sec\nEnd of epoch 187 / 200 \t Time Taken: 20 sec\n(epoch: 188, iters: 41, time: 0.360) G_GAN: 0.740 G_GAN_Feat: 2.516 G_VGG: 0.883 D_real: 0.539 D_fake: 0.377 \nEnd of epoch 188 / 200 \t Time Taken: 20 sec\nEnd of epoch 189 / 200 \t Time Taken: 20 sec\n(epoch: 190, iters: 27, time: 0.360) G_GAN: 0.390 G_GAN_Feat: 3.532 G_VGG: 2.420 D_real: 0.217 D_fake: 0.683 \nEnd of epoch 190 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 190, iters 10830\nEnd of epoch 191 / 200 \t Time Taken: 20 sec\n(epoch: 192, iters: 13, time: 0.429) G_GAN: 0.514 G_GAN_Feat: 3.106 G_VGG: 1.700 D_real: 0.379 D_fake: 0.543 \nEnd of epoch 192 / 200 \t Time Taken: 20 sec\n(epoch: 193, iters: 56, time: 0.359) G_GAN: 0.737 G_GAN_Feat: 2.508 G_VGG: 0.951 D_real: 0.498 D_fake: 0.355 \nsaving the latest model (epoch 193, total_steps 11000)\nEnd of epoch 193 / 200 \t Time Taken: 25 sec\nEnd of epoch 194 / 200 \t Time Taken: 20 sec\n(epoch: 195, iters: 42, time: 0.361) G_GAN: 1.696 G_GAN_Feat: 5.839 G_VGG: 2.467 D_real: 0.299 D_fake: 0.070 \nEnd of epoch 195 / 200 \t Time Taken: 20 sec\nEnd of epoch 196 / 200 \t Time Taken: 20 sec\n(epoch: 197, iters: 28, time: 0.360) G_GAN: 1.035 G_GAN_Feat: 4.214 G_VGG: 2.758 D_real: 0.370 D_fake: 0.213 \nEnd of epoch 197 / 200 \t Time Taken: 20 sec\nEnd of epoch 198 / 200 \t Time Taken: 20 sec\n(epoch: 199, iters: 14, time: 0.360) G_GAN: 0.648 G_GAN_Feat: 4.807 G_VGG: 3.494 D_real: 0.360 D_fake: 0.438 \nEnd of epoch 199 / 200 \t Time Taken: 20 sec\n(epoch: 200, iters: 57, time: 0.359) G_GAN: 0.974 G_GAN_Feat: 3.183 G_VGG: 2.291 D_real: 0.963 D_fake: 0.240 \nEnd of epoch 200 / 200 \t Time Taken: 20 sec\nsaving the model at the end of epoch 200, iters 11400\n"
],
[
"# !python train.py --name apples_trash --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/Trash --checkpoints_dir /content/gdrive/MyDrive/Images_for_GAN/checkpoints --norm batch --loadSize 512 --fineSize 512 --label_nc 0 --no_instance\n!python train.py --name apples_train_1 --dataroot /content/gdrive/MyDrive/Images_for_GAN/apples_RGB_NIR/Trash --label_nc 0 --no_instance --loadSize 320 --fineSize 160 --resize_or_crop resize_and_crop",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0db580660aebe96d31ad0325335846a2de03bd7 | 1,642 | ipynb | Jupyter Notebook | histogram.ipynb | dr-kinder/jupyter-notebooks | 621c70e7e428bf8d71132a962ceab2192e8427c5 | [
"BSD-3-Clause"
] | 1 | 2021-12-04T14:04:53.000Z | 2021-12-04T14:04:53.000Z | histogram.ipynb | dr-kinder/jupyter-notebooks | 621c70e7e428bf8d71132a962ceab2192e8427c5 | [
"BSD-3-Clause"
] | null | null | null | histogram.ipynb | dr-kinder/jupyter-notebooks | 621c70e7e428bf8d71132a962ceab2192e8427c5 | [
"BSD-3-Clause"
] | null | null | null | 28.807018 | 85 | 0.495737 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0db629251ba71a689b69ad130f6d724565523f0 | 267,024 | ipynb | Jupyter Notebook | examples/overlays.ipynb | choldgraf/geopandas | 1731e44b2df88d08adfbc09260dda86d3d35e91d | [
"BSD-3-Clause"
] | 3 | 2015-03-03T21:08:39.000Z | 2015-12-14T23:22:47.000Z | examples/overlays.ipynb | choldgraf/geopandas | 1731e44b2df88d08adfbc09260dda86d3d35e91d | [
"BSD-3-Clause"
] | 1 | 2017-07-30T12:49:42.000Z | 2018-01-06T22:15:22.000Z | examples/overlays.ipynb | choldgraf/geopandas | 1731e44b2df88d08adfbc09260dda86d3d35e91d | [
"BSD-3-Clause"
] | 2 | 2021-01-02T02:25:31.000Z | 2021-01-10T16:41:32.000Z | 444.299501 | 46,605 | 0.920224 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0db6ffb5f2a09d9ff2b27e57ad7993f069add6a | 142,123 | ipynb | Jupyter Notebook | notebooks/BayesianInputShaper/FmfnLoad.ipynb | valikund/valikund.github.io | 5d41879e98286e948c476275ec128f596aa40f21 | [
"MIT"
] | null | null | null | notebooks/BayesianInputShaper/FmfnLoad.ipynb | valikund/valikund.github.io | 5d41879e98286e948c476275ec128f596aa40f21 | [
"MIT"
] | null | null | null | notebooks/BayesianInputShaper/FmfnLoad.ipynb | valikund/valikund.github.io | 5d41879e98286e948c476275ec128f596aa40f21 | [
"MIT"
] | null | null | null | 82.057159 | 1,963 | 0.80182 | [
[
[
"from bayes_opt import BayesianOptimization\nfrom bayes_opt.util import load_logs\n\nfrom sklearn.gaussian_process import GaussianProcessRegressor\nfrom sklearn.gaussian_process.kernels import Matern, RBF\nimport json\nimport numpy as np\nfrom itertools import product\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.animation import FuncAnimation\n%matplotlib inline\nfrom IPython.display import HTML\n",
"_____no_output_____"
],
[
"gp =GaussianProcessRegressor(\n kernel=Matern(length_scale= 10),#RBF(length_scale=[0.05, 1]),\n alpha=1e-6,\n normalize_y=True,\n n_restarts_optimizer=5,\n random_state=5\n )",
"_____no_output_____"
],
[
"#Load data\nlog = \"bad_logs.json\"\nx = []\ny = []\nwith open(log, \"r\") as f:\n for line in f:\n line = json.loads(line)\n y.append(line[\"target\"])\n x.append([line[\"params\"][\"a1\"],line[\"params\"][\"t1\"]])\nx = np.array(x)",
"_____no_output_____"
],
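[
"# Equivalent load via bayes_opt's own helper (a sketch; `load_logs` was imported above\n# but unused). Replaying the log registers the same (params, target) pairs, so a fresh\n# optimizer's GP sees exactly the points parsed by hand in the previous cell.\n# The name `replayed` and the bounds below are illustrative, not from the original run.\nreplayed = BayesianOptimization(f=None, pbounds={'a1': (0, 1.5), 't1': (300, 1000)})\nload_logs(replayed, logs=['bad_logs.json'])\nprint(len(replayed.space), 'observations replayed')",
"_____no_output_____"
],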
[
"model = gp.fit(x,y)",
"_____no_output_____"
],
[
"label_x = \"A1\"\nlabel_y = \"T1\"\nbounds = [[0, 1.5],[300,1000]]\n\nX1 = np.linspace(0, 1.5, 200)\nX2 = np.linspace(300, 1000, 200)\nx1, x2 = np.meshgrid(X1, X2)\nX = np.hstack((x1.reshape(200*200,1),x2.reshape(200*200,1)))\n\nfig = plt.figure(figsize=(13,5))\n\n\ndef update(i):\n fig.clear()\n ax1 = fig.add_subplot(121)\n ax2 = fig.add_subplot(122)\n \n gp.fit(x[:i],y[:i])\n m, v = gp.predict(X, return_std=True)\n\n cf1 = ax1.contourf(X1, X2, m.reshape(200,200),100)\n ax1.plot(x[:i-1,0], x[:i-1,1], 'r.', markersize=10, label=u'Observations')\n ax1.plot(x[i,0], x[i,1], 'r.', markersize=25, label=u'New Point')\n cb1 = fig.colorbar(cf1, ax=ax1)\n ax1.set_xlabel(label_x)\n ax1.set_ylabel(label_y)\n \n ax1.set_title('Posterior mean')\n ##\n ax2.plot(x[i,0], x[i,1], 'r.', markersize=25, label=u'New Point')\n ax2.plot(x[:i-1,0], x[:i-1,1], 'r.', markersize=10, label=u'Observations')\n cf2 = ax2.contourf(X1, X2, np.sqrt(v.reshape(200,200)),100)\n cb2 = fig.colorbar(cf2, ax=ax2)\n ax2.set_xlabel(label_x)\n ax2.set_ylabel(label_y)\n \n ax2.set_title('Posterior sd.')\n return ax1, ax2",
"_____no_output_____"
],
[
"from mpl_toolkits import mplot3d\nfrom matplotlib import cm\n\ndef update3d(i):\n fig.clear()\n ax1 = fig.add_subplot(121, projection='3d')\n ax2 = fig.add_subplot(122, projection='3d')\n \n gp.fit(x[:i],y[:i])\n m, v = gp.predict(X, return_std=True)\n ax1.plot_surface(X1, X2, m.reshape(200,200), 50, cmap=cm.coolwarm)\n ax1.set_xlabel(label_x)\n ax1.set_ylabel(label_y)\n ax1.set_zticks()\n ax1.set_title('Posterior mean')\n ##\n ax2.plot_surface(X1, X2, v.reshape(200,200), 50, cmap=cm.coolwarm)\n ax2.set_xlabel(label_x)\n ax2.set_ylabel(label_y)\n ax2.set_zticks()\n ax2.set_title('Posterior sd.')\n return ax1, ax2",
"_____no_output_____"
],
[
"import matplotlib.animation as animation\nanim = FuncAnimation(fig, update, frames=np.arange(3, x.shape[0]), interval=500)\nanim.save('line.gif', dpi=80, writer='imagemagick')\n#plt.show()",
"_____no_output_____"
],
[
"HTML(anim.to_html5_video())",
"_____no_output_____"
],
[
"ax1 = fig.add_subplot(121, projection='3d')\nax2 = fig.add_subplot(122, projection='3d')\n\ngp.fit(x[:10],y[:10])\nm, v = gp.predict(X, return_std=True)\nprint(m.reshape(200,200).shape)\nax1.plot_surface(x1, x2, m.reshape(200,200), 50)\nax1.set_xlabel(label_x)\nax1.set_ylabel(label_y)\nax1.set_title('Posterior mean')\n##\nax2.plot_surface(x1, x2, v.reshape(200,200), 50)\nax2.set_xlabel(label_x)\nax2.set_ylabel(label_y)\nax2.set_title('Posterior sd.')\nplt.show",
"(200, 200)\n"
],
[
"def prediction(a1, t1):\n m, std = model.predict([[a1, t1]], return_std=True)\n return m[0]",
"_____no_output_____"
],
[
"prediction(0.8, 730)",
"_____no_output_____"
],
[
"from bayes_opt import BayesianOptimization\nfrom bayes_opt.observer import JSONLogger, ScreenLogger\nfrom bayes_opt.event import Events\n\nlogger = JSONLogger(path=\"bad_logs.json\")\n\noptimizer = BayesianOptimization(\n prediction,\n {'a1': (0, 1.5),\n 't1': (300, 1000)})\noptimizer.subscribe(Events.OPTMIZATION_STEP, logger)\noptimizer.subscribe(Events.OPTMIZATION_STEP, ScreenLogger(verbose=2))\noptimizer.set_gp_params(kernel=Matern(length_scale= 10))\noptimizer.maximize( init_points=5,\n n_iter=30, acq=\"ucb\", kappa=5.0)",
"| \u001b[0m 1 \u001b[0m | \u001b[0m-12.82 \u001b[0m | \u001b[0m 1.11 \u001b[0m | \u001b[0m 330.2 \u001b[0m |\n| \u001b[95m 2 \u001b[0m | \u001b[95m-6.401 \u001b[0m | \u001b[95m 0.9573 \u001b[0m | \u001b[95m 642.6 \u001b[0m |\n| \u001b[95m 3 \u001b[0m | \u001b[95m-5.45 \u001b[0m | \u001b[95m 0.8537 \u001b[0m | \u001b[95m 959.8 \u001b[0m |\n| \u001b[0m 4 \u001b[0m | \u001b[0m-11.03 \u001b[0m | \u001b[0m 0.8221 \u001b[0m | \u001b[0m 326.9 \u001b[0m |\n| \u001b[0m 5 \u001b[0m | \u001b[0m-7.451 \u001b[0m | \u001b[0m 1.169 \u001b[0m | \u001b[0m 625.5 \u001b[0m |\n| \u001b[0m 6 \u001b[0m | \u001b[0m-6.931 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 963.7 \u001b[0m |\n| \u001b[0m 7 \u001b[0m | \u001b[0m-6.928 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 955.3 \u001b[0m |\n| \u001b[0m 8 \u001b[0m | \u001b[0m-6.803 \u001b[0m | \u001b[0m 1.035 \u001b[0m | \u001b[0m 636.3 \u001b[0m |\n| \u001b[0m 9 \u001b[0m | \u001b[0m-7.124 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 649.7 \u001b[0m |\n| \u001b[0m 10 \u001b[0m | \u001b[0m-7.432 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 660.2 \u001b[0m |\n| \u001b[0m 11 \u001b[0m | \u001b[0m-7.108 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 672.1 \u001b[0m |\n| \u001b[0m 12 \u001b[0m | \u001b[0m-7.142 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 682.1 \u001b[0m |\n| \u001b[0m 13 \u001b[0m | \u001b[0m-7.109 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 692.7 \u001b[0m |\n| \u001b[0m 14 \u001b[0m | \u001b[0m-6.858 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 703.3 \u001b[0m |\n| \u001b[0m 15 \u001b[0m | \u001b[0m-7.123 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 712.8 \u001b[0m |\n| \u001b[0m 16 \u001b[0m | \u001b[0m-6.586 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 724.4 \u001b[0m |\n| \u001b[0m 17 \u001b[0m | \u001b[0m-7.146 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 733.1 \u001b[0m |\n| \u001b[0m 18 \u001b[0m | \u001b[0m-6.34 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 746.3 \u001b[0m |\n| \u001b[0m 19 \u001b[0m | \u001b[0m-7.175 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 754.3 \u001b[0m |\n| \u001b[0m 20 \u001b[0m | \u001b[0m-6.149 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 770.4 \u001b[0m |\n| \u001b[0m 21 \u001b[0m | \u001b[0m-7.201 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 778.0 \u001b[0m |\n| \u001b[0m 22 \u001b[0m | \u001b[0m-7.187 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 763.8 \u001b[0m |\n| \u001b[0m 23 \u001b[0m | \u001b[0m-8.58 \u001b[0m | \u001b[0m 0.7049 \u001b[0m | \u001b[0m 486.4 \u001b[0m |\n| \u001b[0m 24 \u001b[0m | \u001b[0m-6.462 \u001b[0m | \u001b[0m 1.375 \u001b[0m | \u001b[0m 862.7 \u001b[0m |\n| \u001b[0m 25 \u001b[0m | \u001b[0m-5.803 \u001b[0m | \u001b[0m 1.168 \u001b[0m | \u001b[0m 871.6 \u001b[0m |\n| \u001b[0m 26 \u001b[0m | \u001b[0m-7.13 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 878.8 \u001b[0m |\n| \u001b[0m 27 \u001b[0m | \u001b[0m-7.157 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 852.9 \u001b[0m |\n| \u001b[0m 28 \u001b[0m | \u001b[0m-6.297 \u001b[0m | \u001b[0m 1.5 \u001b[0m | \u001b[0m 835.1 \u001b[0m |\n| \u001b[0m 29 \u001b[0m | \u001b[0m-7.184 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 826.5 \u001b[0m |\n| \u001b[0m 30 \u001b[0m | \u001b[0m-7.167 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 842.6 \u001b[0m |\n| \u001b[0m 31 \u001b[0m | \u001b[0m-7.767 \u001b[0m | \u001b[0m 0.04711 \u001b[0m | \u001b[0m 410.4 \u001b[0m |\n| \u001b[0m 32 \u001b[0m | \u001b[0m-7.313 \u001b[0m | \u001b[0m 0.002745\u001b[0m | \u001b[0m 558.6 \u001b[0m |\n| 
\u001b[0m 33 \u001b[0m | \u001b[0m-6.795 \u001b[0m | \u001b[0m 1.443 \u001b[0m | \u001b[0m 915.2 \u001b[0m |\n| \u001b[0m 34 \u001b[0m | \u001b[0m-7.045 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 926.5 \u001b[0m |\n| \u001b[0m 35 \u001b[0m | \u001b[0m-7.097 \u001b[0m | \u001b[0m 0.0 \u001b[0m | \u001b[0m 903.8 \u001b[0m |\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0db9b739f165222e71842c00796b2af4d65a243 | 17,422 | ipynb | Jupyter Notebook | Image/texture.ipynb | guy1ziv2/earthengine-py-notebooks | 931f57c61c147fe6cff745c2a099a444716e69e4 | [
"MIT"
] | 1 | 2020-07-14T10:45:09.000Z | 2020-07-14T10:45:09.000Z | Image/texture.ipynb | Yesicaleo/earthengine-py-notebooks | b737a889d5023408cc5cec204f8bd5f9d51cdee8 | [
"MIT"
] | null | null | null | Image/texture.ipynb | Yesicaleo/earthengine-py-notebooks | b737a889d5023408cc5cec204f8bd5f9d51cdee8 | [
"MIT"
] | 1 | 2021-08-12T12:19:37.000Z | 2021-08-12T12:19:37.000Z | 83.759615 | 10,660 | 0.831305 | [
[
[
"<table class=\"ee-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/texture.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td>\n <td><a target=\"_blank\" href=\"https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/texture.ipynb\"><img width=26px src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" />Notebook Viewer</a></td>\n <td><a target=\"_blank\" href=\"https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Image/texture.ipynb\"><img width=58px src=\"https://mybinder.org/static/images/logo_social.png\" />Run in binder</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/texture.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a></td>\n</table>",
"_____no_output_____"
],
[
"## Install Earth Engine API\nInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.\nThe magic command `%%capture` can be used to hide output from a specific cell.",
"_____no_output_____"
]
],
[
[
"# %%capture\n# !pip install earthengine-api\n# !pip install geehydro",
"_____no_output_____"
]
],
[
[
"Import libraries",
"_____no_output_____"
]
],
[
[
"import ee\nimport folium\nimport geehydro",
"_____no_output_____"
]
],
[
[
"Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` \nif you are running this notebook for this first time or if you are getting an authentication error. ",
"_____no_output_____"
]
],
[
[
"# ee.Authenticate()\nee.Initialize()",
"_____no_output_____"
]
],
[
[
"## Create an interactive map \nThis step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. \nThe optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.",
"_____no_output_____"
]
],
[
[
"Map = folium.Map(location=[40, -100], zoom_start=4)\nMap.setOptions('HYBRID')",
"_____no_output_____"
]
],
[
[
"## Add Earth Engine Python script ",
"_____no_output_____"
]
],
[
[
"import math\n\n# Load a high-resolution NAIP image.\nimage = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613')\n\n# Zoom to San Francisco, display.\nMap.setCenter(-122.466123, 37.769833, 17)\nMap.addLayer(image, {'max': 255}, 'image')\n\n# Get the NIR band.\nnir = image.select('N')\n\n# Define a neighborhood with a kernel.\nsquare = ee.Kernel.square(**{'radius': 4})\n\n# Compute entropy and display.\nentropy = nir.entropy(square)\nMap.addLayer(entropy,\n {'min': 1, 'max': 5, 'palette': ['0000CC', 'CC0000']},\n 'entropy')\n\n# Compute the gray-level co-occurrence matrix (GLCM), get contrast.\nglcm = nir.glcmTexture(**{'size': 4})\ncontrast = glcm.select('N_contrast')\nMap.addLayer(contrast,\n {'min': 0, 'max': 1500, 'palette': ['0000CC', 'CC0000']},\n 'contrast')\n\n# Create a list of weights for a 9x9 kernel.\nlist = [1, 1, 1, 1, 1, 1, 1, 1, 1]\n# The center of the kernel is zero.\ncenterList = [1, 1, 1, 1, 0, 1, 1, 1, 1]\n# Assemble a list of lists: the 9x9 kernel weights as a 2-D matrix.\nlists = [list, list, list, list, centerList, list, list, list, list]\n# Create the kernel from the weights.\n# Non-zero weights represent the spatial neighborhood.\nkernel = ee.Kernel.fixed(9, 9, lists, -4, -4, False)\n\n# Convert the neighborhood into multiple bands.\nneighs = nir.neighborhoodToBands(kernel)\n\n# Compute local Geary's C, a measure of spatial association.\ngearys = nir.subtract(neighs).pow(2).reduce(ee.Reducer.sum()) \\\n .divide(math.pow(9, 2))\nMap.addLayer(gearys,\n {'min': 20, 'max': 2500, 'palette': ['0000CC', 'CC0000']},\n \"Geary's C\")\n\n",
"_____no_output_____"
]
],
[
[
"## Display Earth Engine data layers ",
"_____no_output_____"
]
],
[
[
"Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)\nMap",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
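"code",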
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0dba472bc77acd38ddb8f162a5e0610d577e111 | 21,578 | ipynb | Jupyter Notebook | Day02_titanic_demo.ipynb | JoyiWaston/Machine_Learning | 16d9da0606febc34f07026fac50184ba7d2eba23 | [
"W3C"
] | null | null | null | Day02_titanic_demo.ipynb | JoyiWaston/Machine_Learning | 16d9da0606febc34f07026fac50184ba7d2eba23 | [
"W3C"
] | null | null | null | Day02_titanic_demo.ipynb | JoyiWaston/Machine_Learning | 16d9da0606febc34f07026fac50184ba7d2eba23 | [
"W3C"
] | null | null | null | 41.100952 | 1,330 | 0.498656 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"# 1.获取数据集\ntitanic = pd.read_csv(\"./ins/titanic.txt\")",
"_____no_output_____"
],
[
"titanic.head()",
"_____no_output_____"
],
[
"# 筛选特征值和目标值\nx = titanic[[\"pclass\", \"age\", \"sex\"]]\ny = titanic[\"survived\"]",
"_____no_output_____"
],
[
"# 2.数据处理\n# 1)缺失值处理\nx[\"age\"].fillna(x[\"age\"].mean(), inplace = True)",
"J:\\Anaconda\\lib\\site-packages\\pandas\\core\\generic.py:6392: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n return self._update_inplace(result)\n"
],
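[
"# A copy-based alternative to the fillna above (a sketch, not part of the original run;\n# assumes `titanic` from the first cell). Assigning to a column of an explicit copy\n# produces the same x without pandas' SettingWithCopyWarning shown above.\nx = titanic[[\"pclass\", \"age\", \"sex\"]].copy()\nx[\"age\"] = x[\"age\"].fillna(x[\"age\"].mean())",
"_____no_output_____"
],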
[
"# 2)转换成字典\nx = x.to_dict(orient=\"records\")",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\n# 3.数据集划分\nx_train, x_test, y_train, y_test = train_test_split(x, y, random_state=6)",
"_____no_output_____"
],
[
"# 4.字典特征抽取\nfrom sklearn.feature_extraction import DictVectorizer\ntransfer = DictVectorizer()\nx_train = transfer.fit_transform(x_train)\nx_test = transfer.transform(x_test)",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier, export_graphviz\nimport graphviz\n# 5.决策树预估器\nestimator = DecisionTreeClassifier(criterion=\"entropy\", max_depth=8)\nestimator.fit(x_train, y_train)\n\n# 6.模型评估\n# 方法一:直接比对真实值\ny_predict = estimator.predict(x_test)\nprint(\"y_predict:\\n\", y_predict)\nprint(\"直接比对真实值和预测值:\\n\", y_test == y_predict)\n\n# 方法二:计算准确率\nscore = estimator.score(x_test, y_test)\nprint(\"准确率为:\\n\", score)\n\n# 7.可视化决策树\ndot_data = export_graphviz(estimator, out_file=None, feature_names=transfer.get_feature_names())\ngraph = graphviz.Source(dot_data)\ngraph.render(\"titanic_tree可视化\")",
"y_predict:\n [0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 1 0 1 0 0 1 0 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0\n 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 1 0 0 0\n 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1\n 0 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0\n 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0\n 0 0 1 1 0 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0\n 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0\n 1 1 0 1 0 0 0 1 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0\n 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 1 1 0 1 1 0 0 0 0 0]\n直接比对真实值和预测值:\n 975 True\n459 True\n1251 False\n14 True\n805 True\n ... \n1289 False\n706 True\n1138 True\n307 True\n1183 True\nName: survived, Length: 329, dtype: bool\n准确率为:\n 0.8419452887537994\n"
],
[
"###随机森林对泰坦尼克号乘客的生存进行预测",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import GridSearchCV",
"_____no_output_____"
],
[
"estimator = RandomForestClassifier()",
"_____no_output_____"
],
[
"# 加入网格搜索和交叉验证\n# 参数准备\nparam_dict = {\"n_estimators\": [120, 200, 300, 500, 800, 1200], \"max_depth\":[5, 8, 15, 25, 30]}\nestimator = GridSearchCV(estimator, param_grid=param_dict, cv=5)\nestimator.fit(x_train,y_train)\n\n# 5.模型评估\n# 方法一:直接比对真实值\ny_predict = estimator.predict(x_test)\nprint(\"y_predict:\\n\", y_predict)\nprint(\"直接比对真实值和预测值:\\n\", y_test == y_predict)\n\n# 方法二:计算准确率\nscore = estimator.score(x_test, y_test)\nprint(\"准确率为:\\n\", score)\n\n# 最佳参数:best_params\nprint(\"最佳参数:\\n\", estimator.best_params_)\n# 最佳结果:best_score\nprint(\"最佳结果:\\n\", estimator.best_score_)\n# 最佳估计器:best_estimator\nprint(\"最佳估计器:\\n\", estimator.best_estimator_)\n# 交叉验证结果:cv_results\nprint(\"交叉验证结果:\\n\", estimator.cv_results_)",
"y_predict:\n [0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0\n 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 1 0 0 0\n 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 1 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1\n 0 1 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0\n 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0\n 0 0 1 1 0 0 0 0 1 1 0 0 0 1 0 0 1 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 1 0\n 0 0 1 0 0 1 1 0 0 0 0 0 1 0 1 0 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0\n 1 1 0 1 0 0 0 1 1 0 0 0 0 1 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0\n 0 0 1 1 0 0 0 1 0 1 0 0 0 0 0 0 1 0 0 1 0 1 0 1 1 0 1 1 0 0 0 0 0]\n直接比对真实值和预测值:\n 975 True\n459 True\n1251 False\n14 True\n805 True\n ... \n1289 False\n706 True\n1138 True\n307 True\n1183 True\nName: survived, Length: 329, dtype: bool\n准确率为:\n 0.8449848024316109\n最佳参数:\n {'max_depth': 5, 'n_estimators': 500}\n最佳结果:\n 0.8242359888117683\n最佳估计器:\n RandomForestClassifier(max_depth=5, n_estimators=500)\n交叉验证结果:\n {'mean_fit_time': array([0.12398586, 0.19309268, 0.28763785, 0.47728109, 0.76535101,\n 1.14705458, 0.13746767, 0.22182374, 0.34388099, 0.56258745,\n 0.91270714, 1.37844424, 0.14997272, 0.24369354, 0.37512131,\n 0.62194362, 0.99407458, 1.48154645, 0.1468483 , 0.24390545,\n 0.3688664 , 0.61257076, 1.0022069 , 1.48467531, 0.15621719,\n 0.24701567, 0.37200975, 0.60944428, 0.9845715 , 1.50028753]), 'std_fit_time': array([4.39805987e-03, 9.36176882e-03, 7.91807402e-03, 6.94047193e-03,\n 6.03227844e-03, 7.30900430e-03, 6.24861719e-03, 6.24880799e-03,\n 4.27324759e-04, 4.58737406e-04, 1.22808608e-02, 1.15036985e-02,\n 7.64310369e-03, 7.65411077e-03, 9.88087928e-03, 6.36992050e-03,\n 1.29529843e-02, 1.15806573e-02, 7.64337497e-03, 7.84702975e-03,\n 7.81995377e-03, 6.15579090e-03, 1.26622645e-02, 9.53740765e-03,\n 1.59027873e-05, 5.82586425e-03, 6.37274950e-03, 1.71232321e-02,\n 9.55783100e-03, 2.80744931e-02]), 'mean_score_time': array([0.00598407, 0.01216693, 0.01874914, 0.04799552, 0.05312815,\n 0.0780858 , 0.00937285, 0.01270819, 0.0156261 , 0.03749514,\n 0.0593709 , 0.07811255, 0.0093648 , 0.01583495, 0.01874237,\n 0.03436213, 0.05935607, 0.10310092, 0.00624108, 0.01250038,\n 0.02499762, 0.0374918 , 0.05935936, 0.09060373, 0. 
,\n 0.01562834, 0.02498598, 0.04062181, 0.05936346, 0.08436141]), 'std_score_time': array([4.92655457e-03, 6.11517713e-03, 6.24733339e-03, 3.07557128e-02,\n 6.88851812e-03, 1.74672162e-05, 7.65290345e-03, 6.36730272e-03,\n 7.35875768e-06, 7.65298999e-03, 6.25308791e-03, 1.29093478e-05,\n 7.64633707e-03, 4.26101951e-04, 6.25000679e-03, 6.25072146e-03,\n 6.26142954e-03, 2.72365548e-02, 7.64374351e-03, 6.25020385e-03,\n 7.65545455e-03, 7.65356532e-03, 6.24826058e-03, 6.24842648e-03,\n 0.00000000e+00, 1.35416716e-05, 7.65499083e-03, 7.65740707e-03,\n 6.24995394e-03, 7.64726684e-03]), 'param_max_depth': masked_array(data=[5, 5, 5, 5, 5, 5, 8, 8, 8, 8, 8, 8, 15, 15, 15, 15, 15,\n 15, 25, 25, 25, 25, 25, 25, 30, 30, 30, 30, 30, 30],\n mask=[False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False,\n False, False, False, False, False, False],\n fill_value='?',\n dtype=object), 'param_n_estimators': masked_array(data=[120, 200, 300, 500, 800, 1200, 120, 200, 300, 500, 800,\n 1200, 120, 200, 300, 500, 800, 1200, 120, 200, 300,\n 500, 800, 1200, 120, 200, 300, 500, 800, 1200],\n mask=[False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False,\n False, False, False, False, False, False, False, False,\n False, False, False, False, False, False],\n fill_value='?',\n dtype=object), 'params': [{'max_depth': 5, 'n_estimators': 120}, {'max_depth': 5, 'n_estimators': 200}, {'max_depth': 5, 'n_estimators': 300}, {'max_depth': 5, 'n_estimators': 500}, {'max_depth': 5, 'n_estimators': 800}, {'max_depth': 5, 'n_estimators': 1200}, {'max_depth': 8, 'n_estimators': 120}, {'max_depth': 8, 'n_estimators': 200}, {'max_depth': 8, 'n_estimators': 300}, {'max_depth': 8, 'n_estimators': 500}, {'max_depth': 8, 'n_estimators': 800}, {'max_depth': 8, 'n_estimators': 1200}, {'max_depth': 15, 'n_estimators': 120}, {'max_depth': 15, 'n_estimators': 200}, {'max_depth': 15, 'n_estimators': 300}, {'max_depth': 15, 'n_estimators': 500}, {'max_depth': 15, 'n_estimators': 800}, {'max_depth': 15, 'n_estimators': 1200}, {'max_depth': 25, 'n_estimators': 120}, {'max_depth': 25, 'n_estimators': 200}, {'max_depth': 25, 'n_estimators': 300}, {'max_depth': 25, 'n_estimators': 500}, {'max_depth': 25, 'n_estimators': 800}, {'max_depth': 25, 'n_estimators': 1200}, {'max_depth': 30, 'n_estimators': 120}, {'max_depth': 30, 'n_estimators': 200}, {'max_depth': 30, 'n_estimators': 300}, {'max_depth': 30, 'n_estimators': 500}, {'max_depth': 30, 'n_estimators': 800}, {'max_depth': 30, 'n_estimators': 1200}], 'split0_test_score': array([0.80203046, 0.8071066 , 0.80203046, 0.8071066 , 0.8071066 ,\n 0.8071066 , 0.78172589, 0.79187817, 0.78680203, 0.79187817,\n 0.78680203, 0.78680203, 0.77664975, 0.78680203, 0.78172589,\n 0.78680203, 0.78172589, 0.77664975, 0.78680203, 0.78172589,\n 0.78680203, 0.78172589, 0.78172589, 0.78172589, 0.77664975,\n 0.78680203, 0.78680203, 0.78172589, 0.78680203, 0.78172589]), 'split1_test_score': array([0.80203046, 0.80203046, 0.80203046, 0.80203046, 0.80203046,\n 0.80203046, 0.78680203, 0.79187817, 0.78680203, 0.78680203,\n 0.78680203, 0.78680203, 0.78680203, 0.78172589, 0.78680203,\n 0.78680203, 0.78680203, 0.78680203, 0.78172589, 0.78680203,\n 0.78680203, 0.78680203, 0.78680203, 0.78680203, 0.78680203,\n 0.78680203, 0.78680203, 0.78680203, 0.78680203, 0.78680203]), 'split2_test_score': array([0.83756345, 0.83756345, 0.84263959, 0.83756345, 0.83756345,\n 
0.83248731, 0.81218274, 0.81218274, 0.81218274, 0.81218274,\n 0.81218274, 0.81218274, 0.80203046, 0.80203046, 0.80203046,\n 0.80203046, 0.8071066 , 0.8071066 , 0.8071066 , 0.80203046,\n 0.8071066 , 0.8071066 , 0.8071066 , 0.8071066 , 0.80203046,\n 0.80203046, 0.8071066 , 0.8071066 , 0.80203046, 0.8071066 ]), 'split3_test_score': array([0.79187817, 0.80203046, 0.79695431, 0.80203046, 0.79695431,\n 0.80203046, 0.80203046, 0.80203046, 0.8071066 , 0.80203046,\n 0.80203046, 0.80203046, 0.80203046, 0.80203046, 0.80203046,\n 0.8071066 , 0.80203046, 0.80203046, 0.78680203, 0.80203046,\n 0.79695431, 0.80203046, 0.80203046, 0.80203046, 0.79187817,\n 0.80203046, 0.8071066 , 0.8071066 , 0.79695431, 0.79695431]), 'split4_test_score': array([0.86734694, 0.86734694, 0.86734694, 0.87244898, 0.86734694,\n 0.87244898, 0.84693878, 0.84693878, 0.84693878, 0.84693878,\n 0.84183673, 0.83673469, 0.82142857, 0.81632653, 0.82653061,\n 0.83163265, 0.82653061, 0.83163265, 0.83673469, 0.83163265,\n 0.82142857, 0.82653061, 0.82653061, 0.83163265, 0.82653061,\n 0.83163265, 0.82142857, 0.82653061, 0.82653061, 0.82142857]), 'mean_test_score': array([0.8201699 , 0.82321558, 0.82220035, 0.82423599, 0.82220035,\n 0.82322076, 0.80593598, 0.80898166, 0.80796644, 0.80796644,\n 0.8059308 , 0.80491039, 0.79778825, 0.79778307, 0.79982389,\n 0.80287475, 0.80083912, 0.8008443 , 0.79983425, 0.8008443 ,\n 0.79981871, 0.80083912, 0.80083912, 0.80185953, 0.7967782 ,\n 0.80185953, 0.80184917, 0.80185435, 0.79982389, 0.79880348]), 'std_test_score': array([0.02823724, 0.02573152, 0.02795349, 0.02750164, 0.02663174,\n 0.0270814 , 0.02318509, 0.02041748, 0.02206003, 0.0213477 ,\n 0.02037329, 0.01859996, 0.01524723, 0.01231549, 0.0156211 ,\n 0.01650638, 0.01589408, 0.01882077, 0.02040647, 0.01739791,\n 0.01316945, 0.01589408, 0.01589408, 0.0175846 , 0.01697929,\n 0.01637042, 0.01335265, 0.01609856, 0.01459788, 0.0142824 ]), 'rank_test_score': array([ 6, 3, 4, 1, 4, 2, 10, 7, 8, 8, 11, 12, 28, 29, 24, 13, 20,\n 18, 23, 18, 26, 20, 20, 14, 30, 14, 17, 16, 24, 27])}\n"
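],
[
"# Optional follow-up (a sketch; assumes the fitted GridSearchCV `estimator` above).\n# The refit best model exposes per-feature importances; pairing them with the\n# DictVectorizer's feature names shows which inputs drive the survival prediction.\nbest_rf = estimator.best_estimator_\npairs = sorted(zip(transfer.get_feature_names(), best_rf.feature_importances_), key=lambda t: -t[1])\nfor name, imp in pairs:\n    print(name, round(imp, 4))",
"_____no_output_____"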
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0dbb503f85ea330a0fb01e14f033cad137d5ceb | 40,008 | ipynb | Jupyter Notebook | tfx/examples/chicago_taxi/chicago_taxi_tfma_local_playground.ipynb | yongsheng268/tfx | 6283fffb3ac81e2f213b4895fbe19623dfa9c4f5 | [
"Apache-2.0"
] | 2 | 2019-07-08T20:56:13.000Z | 2020-08-04T17:07:26.000Z | tfx/examples/chicago_taxi/chicago_taxi_tfma_local_playground.ipynb | yongsheng268/tfx | 6283fffb3ac81e2f213b4895fbe19623dfa9c4f5 | [
"Apache-2.0"
] | 15 | 2020-01-28T22:50:12.000Z | 2022-02-10T00:18:10.000Z | tfx/examples/chicago_taxi/chicago_taxi_tfma_local_playground.ipynb | yongsheng268/tfx | 6283fffb3ac81e2f213b4895fbe19623dfa9c4f5 | [
"Apache-2.0"
] | 1 | 2019-10-06T03:39:58.000Z | 2019-10-06T03:39:58.000Z | 35.881614 | 526 | 0.520196 | [
[
[
"## TFMA Notebook example\n\nThis notebook describes how to export your model for TFMA and demonstrates the analysis tooling it offers.\n\nNote: Please make sure to follow the instructions in [README.md](https://github.com/tensorflow/tfx/blob/master/tfx/examples/chicago_taxi/README.md) when running this notebook\n## Setup\n\nImport necessary packages.",
"_____no_output_____"
]
],
[
[
"import apache_beam as beam \nimport os\nimport preprocess\nimport shutil\nimport tensorflow as tf\nimport tensorflow_data_validation as tfdv\nimport tensorflow_model_analysis as tfma\nfrom google.protobuf import text_format \nfrom tensorflow.python.lib.io import file_io\nfrom tensorflow_transform.beam.tft_beam_io import transform_fn_io\nfrom tensorflow_transform.coders import example_proto_coder\nfrom tensorflow_transform.saved import saved_transform_io\nfrom tensorflow_transform.tf_metadata import dataset_schema\nfrom tensorflow_transform.tf_metadata import schema_utils\nfrom trainer import task\nfrom trainer import taxi",
"_____no_output_____"
]
],
[
[
"Helper functions and some constants for running the notebook locally.",
"_____no_output_____"
]
],
[
[
"BASE_DIR = os.getcwd()\n\nDATA_DIR = os.path.join(BASE_DIR, 'data')\n\nOUTPUT_DIR = os.path.join(BASE_DIR, 'chicago_taxi_output')\n\n# Base dir containing train and eval data\nTRAIN_DATA_DIR = os.path.join(DATA_DIR, 'train')\nEVAL_DATA_DIR = os.path.join(DATA_DIR, 'eval')\n\n# Base dir where TFT writes training data\nTFT_TRAIN_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_train')\nTFT_TRAIN_FILE_PREFIX = 'train_transformed'\n\n# Base dir where TFT writes eval data\nTFT_EVAL_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tft_eval')\nTFT_EVAL_FILE_PREFIX = 'eval_transformed'\n\nTF_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tf')\n\n# Base dir where TFMA writes eval data\nTFMA_OUTPUT_BASE_DIR = os.path.join(OUTPUT_DIR, 'tfma')\n\nSERVING_MODEL_DIR = 'serving_model_dir'\nEVAL_MODEL_DIR = 'eval_model_dir'\n\n\ndef get_tft_train_output_dir(run_id):\n return _get_output_dir(TFT_TRAIN_OUTPUT_BASE_DIR, run_id)\n\n\ndef get_tft_eval_output_dir(run_id):\n return _get_output_dir(TFT_EVAL_OUTPUT_BASE_DIR, run_id)\n\n\ndef get_tf_output_dir(run_id):\n return _get_output_dir(TF_OUTPUT_BASE_DIR, run_id)\n\ndef get_tfma_output_dir(run_id):\n return _get_output_dir(TFMA_OUTPUT_BASE_DIR, run_id)\n\ndef _get_output_dir(base_dir, run_id):\n return os.path.join(base_dir, 'run_' + str(run_id))\n\ndef get_schema_file():\n return os.path.join(OUTPUT_DIR, 'schema.pbtxt')\n",
"_____no_output_____"
]
],
[
[
"Clean up output directories.",
"_____no_output_____"
]
],
[
[
"shutil.rmtree(TFT_TRAIN_OUTPUT_BASE_DIR, ignore_errors=True)\nshutil.rmtree(TFT_EVAL_OUTPUT_BASE_DIR, ignore_errors=True)\nshutil.rmtree(TF_OUTPUT_BASE_DIR, ignore_errors=True)\nshutil.rmtree(get_schema_file(), ignore_errors=True)",
"_____no_output_____"
]
],
[
[
"## Compute and visualize descriptive data statistics",
"_____no_output_____"
]
],
[
[
"# Compute stats over training data.\ntrain_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(TRAIN_DATA_DIR, 'data.csv'))",
"_____no_output_____"
],
[
"# Visualize training data stats.\ntfdv.visualize_statistics(train_stats)",
"_____no_output_____"
]
],
[
[
"## Infer a schema",
"_____no_output_____"
]
],
[
[
"# Infer a schema from the training data stats.\nschema = tfdv.infer_schema(statistics=train_stats, infer_feature_shape=False)\ntfdv.display_schema(schema=schema)",
"_____no_output_____"
]
],
[
[
"## Check evaluation data for errors",
"_____no_output_____"
]
],
[
[
"# Compute stats over eval data.\neval_stats = tfdv.generate_statistics_from_csv(data_location=os.path.join(EVAL_DATA_DIR, 'data.csv'))",
"_____no_output_____"
],
[
"# Compare stats of eval data with training data.\ntfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,\n lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')",
"_____no_output_____"
],
[
"# Check eval data for errors by validating the eval data stats using the previously inferred schema.\nanomalies = tfdv.validate_statistics(statistics=eval_stats, schema=schema)\ntfdv.display_anomalies(anomalies)",
"_____no_output_____"
],
[
"# Update the schema based on the observed anomalies.\n\n# Relax the minimum fraction of values that must come from the domain for feature company.\ncompany = tfdv.get_feature(schema, 'company')\ncompany.distribution_constraints.min_domain_mass = 0.9\n\n# Add new value to the domain of feature payment_type.\npayment_type_domain = tfdv.get_domain(schema, 'payment_type')\npayment_type_domain.value.append('Prcard')\n\n# Validate eval stats after updating the schema \nupdated_anomalies = tfdv.validate_statistics(eval_stats, schema)\ntfdv.display_anomalies(updated_anomalies)",
"_____no_output_____"
]
],
[
[
"## Freeze the schema\n\nNow that the schema has been reviewed and curated, we will store it in a file to reflect its \"frozen\" state.",
"_____no_output_____"
]
],
[
[
"file_io.recursive_create_dir(OUTPUT_DIR)\nfile_io.write_string_to_file(get_schema_file(), text_format.MessageToString(schema))",
"_____no_output_____"
]
],
[
[
"## Preprocess Inputs\n\ntransform_data is defined in preprocess.py and uses the tensorflow_transform library to perform preprocessing. The same code is used for both local preprocessing in this notebook and preprocessing in the Cloud (via Dataflow).",
"_____no_output_____"
]
],
[
[
"# Transform eval data\npreprocess.transform_data(input_handle=os.path.join(EVAL_DATA_DIR, 'data.csv'),\n outfile_prefix=TFT_EVAL_FILE_PREFIX, \n working_dir=get_tft_eval_output_dir(0),\n schema_file=get_schema_file(),\n pipeline_args=['--runner=DirectRunner'])\nprint('Done')",
"_____no_output_____"
],
[
"# Transform training data\npreprocess.transform_data(input_handle=os.path.join(TRAIN_DATA_DIR, 'data.csv'),\n outfile_prefix=TFT_TRAIN_FILE_PREFIX, \n working_dir=get_tft_train_output_dir(0),\n schema_file=get_schema_file(),\n pipeline_args=['--runner=DirectRunner'])\nprint('Done')",
"_____no_output_____"
]
],
[
[
"## Compute statistics over transformed data ",
"_____no_output_____"
]
],
[
[
"# Compute stats over transformed training data.\nTRANSFORMED_TRAIN_DATA = os.path.join(get_tft_train_output_dir(0), TFT_TRAIN_FILE_PREFIX + \"*\") \ntransformed_train_stats = tfdv.generate_statistics_from_tfrecord(data_location=TRANSFORMED_TRAIN_DATA)",
"_____no_output_____"
],
[
"# Visualize transformed training data stats and compare to raw training data. \n# Use 'Feature search' to focus on a feature and see statistics pre- and post-transformation.\ntfdv.visualize_statistics(transformed_train_stats, train_stats, lhs_name='TRANSFORMED', rhs_name='RAW')",
"_____no_output_____"
]
],
[
[
"## Prepare the Model\n\nTo use TFMA, export the model into an **EvalSavedModel** by calling ``tfma.export.export_eval_savedmodel``.\n\n``tfma.export.export_eval_savedmodel`` is analogous to ``estimator.export_savedmodel`` but exports the evaluation graph as opposed to the training or inference graph. Notice that one of the inputs is ``eval_input_receiver_fn`` which is analogous to ``serving_input_receiver_fn`` for ``estimator.export_savedmodel``. For more details, refer to the documentation for TFMA on Github.\n\nContruct the **EvalSavedModel** after training is completed.",
"_____no_output_____"
]
],
[
[
"def run_experiment(hparams):\n \"\"\"Run the training and evaluate using the high level API\"\"\"\n\n # Train and evaluate the model as usual.\n estimator = task.train_and_maybe_evaluate(hparams)\n\n # Export TFMA's sepcial EvalSavedModel\n eval_model_dir = os.path.join(hparams.output_dir, EVAL_MODEL_DIR)\n receiver_fn = lambda: eval_input_receiver_fn(hparams.tf_transform_dir)\n\n tfma.export.export_eval_savedmodel(\n estimator=estimator,\n export_dir_base=eval_model_dir,\n eval_input_receiver_fn=receiver_fn)\n \ndef eval_input_receiver_fn(working_dir):\n # Extract feature spec from the schema.\n raw_feature_spec = schema_utils.schema_as_feature_spec(schema).feature_spec\n\n serialized_tf_example = tf.placeholder(\n dtype=tf.string, shape=[None], name='input_example_tensor')\n\n # First we deserialize our examples using the raw schema.\n features = tf.parse_example(serialized_tf_example, raw_feature_spec)\n\n # Now that we have our raw examples, we must process them through tft\n _, transformed_features = (\n saved_transform_io.partially_apply_saved_transform(\n os.path.join(working_dir, transform_fn_io.TRANSFORM_FN_DIR),\n features))\n\n # The key MUST be 'examples'.\n receiver_tensors = {'examples': serialized_tf_example}\n \n # NOTE: Model is driven by transformed features (since training works on the\n # materialized output of TFT, but slicing will happen on raw features.\n features.update(transformed_features)\n \n return tfma.export.EvalInputReceiver(\n features=features,\n receiver_tensors=receiver_tensors,\n labels=transformed_features[taxi.transformed_name(taxi.LABEL_KEY)])\n\nprint('Done')",
"_____no_output_____"
]
],
[
[
"## Train and export the model for TFMA",
"_____no_output_____"
]
],
[
[
"def run_local_experiment(tft_run_id, tf_run_id, num_layers, first_layer_size, scale_factor):\n \"\"\"Helper method to train and export the model for TFMA\n \n The caller specifies the input and output directory by providing run ids. The optional parameters\n allows the user to change the modelfor time series view.\n \n Args:\n tft_run_id: The run id for the preprocessing. Identifies the folder containing training data.\n tf_run_id: The run for this training run. Identify where the exported model will be written to.\n num_layers: The number of layers used by the hiden layer.\n first_layer_size: The size of the first hidden layer.\n scale_factor: The scale factor between each layer in in hidden layers.\n \"\"\"\n hparams = tf.contrib.training.HParams(\n # Inputs: are tf-transformed materialized features\n train_files=os.path.join(get_tft_train_output_dir(tft_run_id), TFT_TRAIN_FILE_PREFIX + '-00000-of-*'),\n eval_files=os.path.join(get_tft_eval_output_dir(tft_run_id), TFT_EVAL_FILE_PREFIX + '-00000-of-*'),\n schema_file=get_schema_file(),\n # Output: dir for trained model\n job_dir=get_tf_output_dir(tf_run_id),\n tf_transform_dir=get_tft_train_output_dir(tft_run_id),\n \n # Output: dir for both the serving model and eval_model which will go into tfma\n # evaluation\n output_dir=get_tf_output_dir(tf_run_id),\n train_steps=10000,\n eval_steps=5000,\n num_layers=num_layers,\n first_layer_size=first_layer_size,\n scale_factor=scale_factor,\n num_epochs=None,\n train_batch_size=40,\n eval_batch_size=40)\n\n run_experiment(hparams)\n\nprint('Done')",
"_____no_output_____"
],
[
"run_local_experiment(tft_run_id=0,\n tf_run_id=0,\n num_layers=4,\n first_layer_size=100,\n scale_factor=0.7)\nprint('Done')",
"_____no_output_____"
]
],
[
[
"## Run TFMA to compute metrics\nFor local analysis, TFMA offers a helper method ``tfma.run_model_analysis``",
"_____no_output_____"
]
],
[
[
"help(tfma.run_model_analysis)",
"_____no_output_____"
]
],
[
[
"#### You can also write your own custom pipeline if you want to perform extra transformations on the data before evaluation.",
"_____no_output_____"
]
],
[
[
"def run_tfma(slice_spec, tf_run_id, tfma_run_id, input_csv, schema_file, add_metrics_callbacks=None):\n \"\"\"A simple wrapper function that runs tfma locally.\n \n A function that does extra transformations on the data and then run model analysis.\n \n Args:\n slice_spec: The slicing spec for how to slice the data.\n tf_run_id: An id to contruct the model directories with.\n tfma_run_id: An id to construct output directories with.\n input_csv: The evaluation data in csv format.\n schema_file: The file holding a text-serialized schema for the input data.\n add_metrics_callback: Optional list of callbacks for computing extra metrics.\n \n Returns:\n An EvalResult that can be used with TFMA visualization functions.\n \"\"\"\n eval_model_base_dir = os.path.join(get_tf_output_dir(tf_run_id), EVAL_MODEL_DIR)\n eval_model_dir = os.path.join(eval_model_base_dir, next(os.walk(eval_model_base_dir))[1][0])\n eval_shared_model = tfma.default_eval_shared_model(\n eval_saved_model_path=eval_model_dir,\n add_metrics_callbacks=add_metrics_callbacks)\n schema = taxi.read_schema(schema_file)\n \n print(eval_model_dir)\n \n display_only_data_location = input_csv\n \n with beam.Pipeline() as pipeline:\n csv_coder = taxi.make_csv_coder(schema)\n raw_data = (\n pipeline\n | 'ReadFromText' >> beam.io.ReadFromText(\n input_csv,\n coder=beam.coders.BytesCoder(),\n skip_header_lines=True)\n | 'ParseCSV' >> beam.Map(csv_coder.decode))\n \n # Examples must be in clean tf-example format.\n coder = taxi.make_proto_coder(schema)\n raw_data = (\n raw_data\n | 'ToSerializedTFExample' >> beam.Map(coder.encode))\n\n _ = (raw_data\n | 'ExtractEvaluateAndWriteResults' >>\n tfma.ExtractEvaluateAndWriteResults(\n eval_shared_model=eval_shared_model,\n slice_spec=slice_spec,\n output_path=get_tfma_output_dir(tfma_run_id),\n display_only_data_location=input_csv))\n\n return tfma.load_eval_result(output_path=get_tfma_output_dir(tfma_run_id))\n \nprint('Done')",
"_____no_output_____"
]
],
[
[
"#### You can also compute metrics on slices of your data in TFMA. Slices can be specified using ``tfma.slicer.SingleSliceSpec``.\n\nBelow are examples of how slices can be specified.",
"_____no_output_____"
]
],
[
[
"# An empty slice spec means the overall slice, that is, the whole dataset.\nOVERALL_SLICE_SPEC = tfma.slicer.SingleSliceSpec()\n\n# Data can be sliced along a feature column\n# In this case, data is sliced along feature column trip_start_hour.\nFEATURE_COLUMN_SLICE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])\n\n# Data can be sliced by crossing feature columns\n# In this case, slices are computed for trip_start_day x trip_start_month.\nFEATURE_COLUMN_CROSS_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_month'])\n\n# Metrics can be computed for a particular feature value.\n# In this case, metrics is computed for all data where trip_start_hour is 12.\nFEATURE_VALUE_SPEC = tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 12)])\n\n# It is also possible to mix column cross and feature value cross.\n# In this case, data where trip_start_hour is 12 will be sliced by trip_start_day.\nCOLUMN_CROSS_VALUE_SPEC = tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])\n\nALL_SPECS = [\n OVERALL_SLICE_SPEC,\n FEATURE_COLUMN_SLICE_SPEC, \n FEATURE_COLUMN_CROSS_SPEC, \n FEATURE_VALUE_SPEC, \n COLUMN_CROSS_VALUE_SPEC \n]",
"_____no_output_____"
]
],
[
[
"#### Let's run TFMA!",
"_____no_output_____"
]
],
[
[
"tf.logging.set_verbosity(tf.logging.INFO)\n\ntfma_result_1 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), \n tf_run_id=0, \n tfma_run_id=1,\n slice_spec=ALL_SPECS,\n schema_file=get_schema_file())\nprint('Done')\n",
"_____no_output_____"
]
],
[
[
"## Visualization: Slicing Metrics\n\nTo see the slices, either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec). If neither is provided, the overall will be displayed.\n\nThe default visualization is **slice overview** when the number of slices is small. It shows the value of a metric for each slice sorted by the another metric. It is also possible to set a threshold to filter out slices with smaller weights.\n\nThis view also supports **metrics histogram** as an alternative visualization. It is also the defautl view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Slices with small weights can be fitlered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can be used to remove outliers in the visualization and the metrics table below.",
"_____no_output_____"
]
],
[
[
"# Show data sliced along feature column trip_start_hour.\ntfma.view.render_slicing_metrics(\n tfma_result_1, slicing_column='trip_start_hour')",
"_____no_output_____"
],
[
"# Show metrics sliced by COLUMN_CROSS_VALUE_SPEC above.\ntfma.view.render_slicing_metrics(tfma_result_1, slicing_spec=COLUMN_CROSS_VALUE_SPEC)",
"_____no_output_____"
],
[
"# Show overall metrics.\ntfma.view.render_slicing_metrics(tfma_result_1)",
"_____no_output_____"
]
],
[
[
"## Visualization: Plots\n\nTFMA offers a number of built-in plots. To see them, add them to ``add_metrics_callbacks``",
"_____no_output_____"
]
],
[
[
"tf.logging.set_verbosity(tf.logging.INFO)\n\ntfma_vis = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), \n tf_run_id=0,\n tfma_run_id='vis',\n slice_spec=ALL_SPECS,\n schema_file=get_schema_file(),\n add_metrics_callbacks=[\n # calibration_plot_and_prediction_histogram computes calibration plot and prediction\n # distribution at different thresholds.\n tfma.post_export_metrics.calibration_plot_and_prediction_histogram(),\n # auc_plots enables precision-recall curve and ROC visualization at different thresholds.\n tfma.post_export_metrics.auc_plots()\n ])\n\nprint('Done')",
"_____no_output_____"
]
],
[
[
"Plots must be visualized for an individual slice. To specify a slice, use ``tfma.slicer.SingleSliceSpec``.\n\nIn the example below, we are using ``tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)])`` to specify the slice where trip_start_hour is 1.\n\nPlots are interactive:\n- Drag to pan\n- Scroll to zoom\n- Right click to reset the view\n\nSimply hover over the desired data point to see more details.",
"_____no_output_____"
]
],
[
[
"tfma.view.render_plot(tfma_vis, tfma.slicer.SingleSliceSpec(features=[('trip_start_hour', 1)]))",
"_____no_output_____"
]
],
[
[
"#### Custom metrics\n\nIn addition to plots, it is also possible to compute additional metrics not present at export time or custom metrics metrics using ``add_metrics_callbacks``.\n\nAll metrics in ``tf.metrics`` are supported in the callback and can be used to compose other metrics:\nhttps://www.tensorflow.org/api_docs/python/tf/metrics\n\nIn the cells below, false negative rate is computed as an example.",
"_____no_output_____"
]
],
[
[
"# Defines a callback that adds FNR to the result.\ndef add_fnr_for_threshold(threshold):\n def _add_fnr_callback(features_dict, predictions_dict, labels_dict):\n metric_ops = {}\n prediction_tensor = tf.cast(\n predictions_dict.get(tf.contrib.learn.PredictionKey.LOGISTIC), tf.float64)\n fn_value_op, fn_update_op = tf.metrics.false_negatives_at_thresholds(tf.squeeze(labels_dict), \n tf.squeeze(prediction_tensor), \n [threshold])\n tp_value_op, tp_update_op = tf.metrics.true_positives_at_thresholds(tf.squeeze(labels_dict), \n tf.squeeze(prediction_tensor), \n [threshold])\n fnr = fn_value_op[0] / (fn_value_op[0] + tp_value_op[0])\n metric_ops['FNR@' + str(threshold)] = (fnr, tf.group(fn_update_op, tp_update_op)) \n return metric_ops\n \n return _add_fnr_callback",
"_____no_output_____"
],
[
"tf.logging.set_verbosity(tf.logging.INFO)\n\ntfma_fnr = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), \n tf_run_id=0,\n tfma_run_id='fnr',\n slice_spec=ALL_SPECS,\n schema_file=get_schema_file(),\n add_metrics_callbacks=[\n # Simply add the call here.\n add_fnr_for_threshold(0.75)\n ])\ntfma.view.render_slicing_metrics(tfma_fnr, slicing_spec=FEATURE_COLUMN_SLICE_SPEC)",
"_____no_output_____"
]
],
[
[
"## Visualization: Time Series\n\nIt is important to track how your model is doing over time. TFMA offers two modes to show your model performs over time.\n\n**Multiple model analysis** shows how model perfoms from one version to another. This is useful early on to see how the addition of new features, change in modeling technique, etc, affects the performance. TFMA offers a convenient method.",
"_____no_output_____"
]
],
[
[
"help(tfma.multiple_model_analysis)",
"_____no_output_____"
]
],
[
[
"**Multiple data analysis** shows how a model perfoms under different evaluation data set. This is useful to ensure that model performance does not degrade over time. TFMA offer a conveneient method.",
"_____no_output_____"
]
],
[
[
"help(tfma.multiple_data_analysis)",
"_____no_output_____"
]
],
[
[
"It is also possible to compose a time series manually.",
"_____no_output_____"
]
],
[
[
"# Create different models.\n\n# Run some experiments with different hidden layer configurations.\nrun_local_experiment(tft_run_id=0,\n tf_run_id=1,\n num_layers=3,\n first_layer_size=200,\n scale_factor=0.7)\n\nrun_local_experiment(tft_run_id=0,\n tf_run_id=2,\n num_layers=4,\n first_layer_size=240,\n scale_factor=0.5)\n\nprint('Done')",
"_____no_output_____"
],
[
"tfma_result_2 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), \n tf_run_id=1, \n tfma_run_id=2, \n slice_spec=ALL_SPECS,\n schema_file=get_schema_file())\n\ntfma_result_3 = run_tfma(input_csv=os.path.join(EVAL_DATA_DIR, 'data.csv'), \n tf_run_id=2, \n tfma_run_id=3,\n slice_spec=ALL_SPECS,\n schema_file=get_schema_file())\nprint('Done')",
"_____no_output_____"
]
],
[
[
"Like plots, time series view must visualized for a slice too.\n\nIn the example below, we are showing the overall slice.\n\nSelect a metric to see its time series graph. Hover over each data point to get more details.",
"_____no_output_____"
]
],
[
[
"eval_results = tfma.make_eval_results([tfma_result_1, tfma_result_2, tfma_result_3], \n tfma.constants.MODEL_CENTRIC_MODE)\ntfma.view.render_time_series(eval_results, OVERALL_SLICE_SPEC)\n",
"_____no_output_____"
]
],
[
[
"Serialized results can also be used to construct a time series. Thus, there is no need to re-run TFMA for models already evaluated for a long running pipeline.",
"_____no_output_____"
]
],
[
[
"# Visualize the results in a Time Series. In this case, we are showing the slice specified.\neval_results_from_disk = tfma.load_eval_results([get_tfma_output_dir(1), \n get_tfma_output_dir(2), \n get_tfma_output_dir(3)], \n tfma.constants.MODEL_CENTRIC_MODE)\ntfma.view.render_time_series(eval_results_from_disk, FEATURE_VALUE_SPEC)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0dbc4386c8b75bb95074898fccf5ce9aca56a56 | 48,645 | ipynb | Jupyter Notebook | Notebook.ipynb | amaliestokholm/arxiv_on_deck | 1b22923c50f846bfcf8c4691c2fee8621e890794 | [
"MIT"
] | 1 | 2017-11-03T09:59:29.000Z | 2017-11-03T09:59:29.000Z | Notebook.ipynb | amaliestokholm/arxiv_on_deck | 1b22923c50f846bfcf8c4691c2fee8621e890794 | [
"MIT"
] | 1 | 2020-08-25T10:03:37.000Z | 2020-08-25T10:03:37.000Z | Notebook.ipynb | amaliestokholm/arxiv_on_deck | 1b22923c50f846bfcf8c4691c2fee8621e890794 | [
"MIT"
] | 3 | 2019-07-30T15:05:24.000Z | 2020-08-16T19:01:39.000Z | 52.932535 | 853 | 0.552431 | [
[
[
"# Run the scrapper\n\n%run mpia.py -d today",
"*** Matched author: Wel Arjen van der Wel\n*** Matched author: Wu Po-Feng Wu\n*** Matched author: Barisic Ivana Barisic\n*** Matched author: Chauke Priscilla Chauke\n*** Matched author: Houdt Josha van Houdt\n*** Matched author: Pillepich Annalisa Pillepich\n*** Matched author: Joshi Gandhali Joshi\n*** Matched author: Gould Andrew Gould\n*** Matched author: Martin Nicolas Martin\n*** Matched author: Wang Jason J. Wang\n*** Matched author: Zhu Zhaohuan Zhu\n*** Matched author: Walter Alex B. Walter\n*** Matched author: Soler J. D. Soler\n*** Matched author: Beuther H. Beuther\n*** Matched author: Rugel M. Rugel\n*** Matched author: Wang Y. Wang\n*** Matched author: Henning Th. Henning\n*** Matched author: Kainulainen J. Kainulainen\n*** Matched author: Mottram J. C. Mottram\n*** Matched author: Lee Eve J. Lee\n*** Matched author: Feldt M. Feldt\n*** Matched author: Cantalloube F. Cantalloube\n*** Matched author: Keppler M. Keppler\n*** Matched author: Maire A.-L. Maire\n*** Matched author: Mueller A. Mueller\n*** Matched author: Samland M. Samland\n*** Matched author: Henning T. Henning\n*** Matched author: Henning T. Henning\n*** Matched author: Wu Dong-Hong Wu\n*** Matched author: Zhang Rachel C. Zhang\n*** Matched author: Wang Shiang-Yu Wang\n*** Matched author: Wu Gang Wu\n*** Matched author: Bouwman Jordy Bouwman\n*** Matched author: Bouwman Jordy Bouwman\n*** Matched author: Avenhaus Henning Avenhaus\n*** Matched author: Bertrang Gesa H. -M. Bertrang\n*** Matched author: Schreiber Matthias R. Schreiber\n*** Matched author: Jordán Andrés Jordán\n*** Matched author: Espinoza Néstor Espinoza\n*** Matched author: Henning Thomas Henning\n*** Matched author: Rabus Markus Rabus\n*** Matched author: Sarkis Paula Sarkis\n*** Matched author: Ludwig H.-G. Ludwig\n*** Matched author: Zhang Ming-Jian Zhang\n*** Matched author: Martin R Martin\n*** Matched author: Bailer-Jones C.A.L. Bailer-Jones\n*** Matched author: Wang Ji Wang\n[arXiv:1809.08236]: The Large Early Galaxy Astrophysics Census (LEGA-C) Data Release II: dynamical and stellar population properties of z ~< 1 galaxies in the COSMOS field\n\tCaroline M. S. Straatman, \\hl{Arjen van der Wel}, Rachel Bezanson, Camilla Pacifici, Anna Gallazzi, \\hl{Po-Feng Wu}, Kai Noeske, \\hl{Ivana Barisic}, Eric F. Bell, Gabriel B. Brammer, Joao Calhau, \\hl{Priscilla Chauke}, Marijn Franx, \\hl{Josha van Houdt}, Ivo Labbe, Michael V. Maseda, Juan C. Munoz-Mateos, Adam Muzzin, Jesse van de Sande, David Sobral, Justin S. 
Spilker\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\how}[1]{\\textcolor{purple}{#1}}\n\\providecommand{\\rA}{\\mathrm{\\AA}}\n\\providecommand{\\msun}{\\mathrm{M}_{\\sun}}\n\\providecommand{\\logm}{\\mathrm{logM/\\msun}}\n\\providecommand{\\ha}{H$\\alpha$}\n\\providecommand{\\hb}{H$\\beta$}\n\\providecommand{\\hg}{H$\\gamma$}\n\\providecommand{\\hd}{H$\\delta$}\n\\providecommand{\\dfn}{D$4000_n$}\n\\providecommand{\\hda}{\\hd$_\\mathrm{A}$}\n\\providecommand{\\mgt}{$\\mathrm{Mg_{2}}$}\n\\providecommand{\\nuse}{1442}\n\\providecommand{\\hblim}{$44.9\\times10^{-19}\\ \\mathrm{ergs\\ cm^{-2}\\ s^{-1}}$}\n\\providecommand{\\sfrlima}{$0.23\\ \\msun\\ \\mathrm{yr^{-1}}$}\n\\providecommand{\\sfrlimb}{$0.47\\ \\msun\\ \\mathrm{yr^{-1}}$}\n\\providecommand{\\sfrlimc}{$0.72\\ \\msun\\ \\mathrm{yr^{-1}}$}\n\\providecommand{\\medianz}{$z_{\\mathrm{spec}}=0.697$}\n\\providecommand{\\sfrlimfinal}{$2.2\\ \\msun\\ \\mathrm{yr^{-1}}$}\n\\providecommand{\\totn}{1988}\n\\providecommand{\\prims}{1550}\n\\providecommand{\\fills}{438}\n\n**** From Heidelberg: True\n\nCaroline M. S. Straatman, et al.; incl. \\hl{Arjen van der Wel}, \\hl{Po-Feng Wu}, \\hl{Ivana Barisic}, \\hl{Priscilla Chauke}, \\hl{Josha van Houdt}\nPDF postage: 1809.08236.pdf\n[arXiv:1809.08239]: The optical morphologies of galaxies in the IllustrisTNG simulation: a comparison to Pan-STARRS observations\n\tVicente Rodriguez-Gomez, Gregory F. Snyder, Jennifer M. Lotz, Dylan Nelson, \\hl{Annalisa Pillepich}, Volker Springel, Shy Genel, Rainer Weinberger, Sandro Tacchella, Ruediger Pakmor, Paul Torrey, Federico Marinacci, Mark Vogelsberger, Lars Hernquist, David A. Thilker\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\Msun}{{\\rm M}_{\\odot}}\n\\providecommand{\\facc}{f_{\\rm acc}}\n\\providecommand{\\fex}{f_{\\rm ex}}\n\\providecommand{\\krot}{\\kappa_{\\rm rot}}\n\n**** From Heidelberg: True\n\nVicente Rodriguez-Gomez, et al.; incl. \\hl{Annalisa Pillepich}\nPDF postage: 1809.08239.pdf\n[arXiv:1809.08241]: Wide-Field Optical Spectroscopy of Abell 133: A Search for Filaments Reported in X-ray Observations\n\tThomas Connor, Daniel D. Kelson, John Mulchaey, Alexey Vikhlinin, Shannon G. Patel, Michael L. Balogh, \\hl{Gandhali Joshi}, Ralph Kraft, Daisuke Nagai, Svetlana Starikova\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\vdag}{(v)^\\dagger}\\graphicspath{{./}{figures/}}\n\n**** From Heidelberg: True\n\nThomas Connor, et al.; incl. \\hl{Gandhali Joshi}\nPDF postage: 1809.08241.pdf\n[arXiv:1809.08243]: First Resolution of Microlensed Images\n\tSubo Dong, (KIAA-PKU), , A. Mérand, F. Delplancke-Ströbele, (ESO), , \\hl{Andrew Gould}, (MPIA, KASI, OSU), , Ping Chen, R. Post, C.S. Kochanek, K. Z. Stanek, G. W. Christie, Robert Mutel, T. Natusch, T. W.-S. Holoien, J. L. Prieto, B. J. Shappee, Todd A. 
Thompson\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\msun}{{\\rm \\ M_\\odot}}\n\\providecommand{\\bdv}[1]{\\mbox{\\boldmath$#1$}}\n\\providecommand{\\bd}[1]{{\\rm #1}}\\def\\au{{\\rm AU}} \n\\def\\sinc{{\\rm sinc}} \n\\def\\kms{{\\rm km}\\,{\\rm s}^{-1}}\n\\def\\masyr{{\\rm mas}\\,{\\rm yr}^{-1}}\n\\def\\kpc{{\\rm kpc}}\n\\def\\mas{{\\rm mas}}\n\\def\\sat{{\\rm sat}}\n\\def\\muas{\\mu{\\rm as}}\n\\def\\var{{\\rm var}}\n\\def\\pc{{\\rm pc}}\n\\def\\orb{{\\rm orb}}\n\\def\\obs{{\\rm obs}}\n\\def\\max{{\\rm max}}\n\\def\\min{{\\rm min}}\n\\def\\rel{{\\rm rel}}\n\\def\\ast{{\\rm ast}}\n\\def\\eff{{\\rm eff}}\n\\def\\rot{{\\rm rot}}\n\\def\\lsr{{\\rm lsr}}\n\\def\\hel{{\\rm hel}}\n\\def\\geo{{\\rm geo}}\n\\def\\e{{\\rm E}}\n\\def\\bpi{{\\bdv\\pi}}\n\\def\\bmu{{\\bdv\\mu}}\n\\def\\balpha{{\\bdv\\alpha}}\n\\def\\bgamma{{\\bdv\\gamma}}\n\\def\\bDelta{{\\bdv\\Delta}}\n\\def\\btheta{{\\bdv\\theta}}\n\\def\\bphi{{\\bdv\\phi}}\n\\def\\bp{{\\bf p}}\n\\def\\bv{{\\bf v}}\n\\def\\bu{{\\bf u}}\n\\def\\naive{{\\rm naive}}\n\\def\\revise{\\bf}\n\n**** From Heidelberg: True\n\nSubo Dong, et al.; incl. \\hl{Andrew Gould}\nPDF postage: 1809.08243.pdf\n[arXiv:1809.08245]: A-type stars in the Canada-France Imaging Survey I. The stellar halo of the Milky Way traced to large radius by blue horizontal branch stars\n\tGuillaume F. Thomas, Alan W. McConnachie, Rodrigo A. Ibata, Patrick Côté, \\hl{Nicolas Martin}, Else Starkenburg, Raymond Carlberg, Scott Chapman, Sébastien Fabbro, Benoit Famaey, Nicholas Fantin, Stephen Gwyn, Vincent Hénault-Brunet, Khyati Malhan, Julio Navarro, Annie C. Robin, Douglas Scott\nextracting tarball...\n*** Found macros and definitions in the header: \n\\def\\ltsima{$\\; \\buildrel < \\over \\sim \\;$}\n\\def\\simlt{\\lower.5ex\\hbox{\\ltsima}}\n\\def\\gtsima{$\\; \\buildrel > \\over \\sim \\;$}\n\\def\\simgt{\\lower.5ex\\hbox{\\gtsima}}\n\n**** From Heidelberg: True\n\nGuillaume F. Thomas, et al.; incl. \\hl{Nicolas Martin}\nPDF postage: 1809.08245.pdf\n[arXiv:1809.08261]: A Bayesian Framework for Exoplanet Direct Detection and Non-Detection\n\tJean-Baptiste Ruffio, Dimitri Mawet, Ian Czekala, Bruce Macintosh, Robert J. De Rosa, Garreth Ruane, Michael Bottom, Laurent Pueyo, \\hl{Jason J. Wang}, Lea Hirsch, \\hl{Zhaohuan Zhu}, Eric L. Nielsen\nextracting tarball...\nmultiple tex files\nFound main document in: (0, './tmp/sample62.tex')\nFound main document in: ./tmp/sample62.tex\n0 ./tmp/sample62.tex\nFound main document in: ./tmp/sample62.tex\n*** Found document inclusions \n input command: content\n*** Found macros and definitions in the header: \n\\providecommand{\\vdag}{(v)^\\dagger}\n\\providecommand{\\Secref}[1]{\\hyperref[#1]{Section~\\ref*{#1}}}\n\\providecommand{\\Appref}[1]{\\hyperref[#1]{Appendix~\\ref*{#1}}}\n\\providecommand{\\epseri}{$\\epsilon$ Eridani}\\graphicspath{{./}{figures/}}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08301]: SCExAO, an instrument with a dual purpose: perform cutting-edge science and develop new technologies\n\tJulien Lozi, Olivier Guyon, Nemanja Jovanovic, Sean Goebel, Prashant Pathak, Nour Skaf, Ananya Sahoo, Barnaby Norris, Frantz Martinache, Mamadou N'Diaye, Ben Mazin, \\hl{Alex B. Walter}, Peter Tuthill, Tomoyuki Kudo, Hajime Kawahara, Takayuki Kotani, Michael Ireland, Nick Cvetojevic, Elsa Huby, Sylvestre Lacour, Sebastien Vievard, Tyler D. Groff, Jeffrey K. 
Chilcote, Jeremy Kasdin, Justin Knight, Frans Snik, David Doelman, Yosuke Minowa, Christophe Clergeon, Naruhisa Takato, Motohide Tamura, Thayne Currie, Hideki Takami, Masa Hayashi\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\baselinestretch}{1.0}\n\\providecommand{\\mum}{\\mbox{{\\usefont{U}{eur}{m}{n}{\\char22}}m}\\xspace}\n\\providecommand{\\mus}{\\mbox{{\\usefont{U}{eur}{m}{n}{\\char22}}s}\\xspace}\n\\providecommand{\\sce}{\\mbox{SCE\\lowercase{x}AO}\\xspace}\n\\providecommand{\\e}[1]{10^{#1}}\n\\providecommand{\\E}[1]{\\times10^{#1}}\n\\providecommand{\\lod}{\\mbox{$\\lambda$/D}\\xspace}\n\\providecommand{\\FIG}[3]{\\includegraphics[width=#1\\linewidth,draft=#2]{#3}}\n\\providecommand{\\FIGH}[3]{\\includegraphics[height=#1cm,draft=#2]{#3}}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08338]: Histogram of oriented gradients: a technique for the study of molecular cloud formation\n\t\\hl{J. D. Soler}, \\hl{H. Beuther}, \\hl{M. Rugel}, \\hl{Y. Wang}, L. D. Anderson, P. C. Clark, S. C. O. Glover, P. F. Goldsmith, A. Goodman, P. Hennebelle, \\hl{Th. Henning}, M. Heyer, \\hl{J. Kainulainen}, R. S. Klessen, N. M. McClure-Griffiths, K. M. Menten, \\hl{J. C. Mottram}, S. E. Ragan, P. Schilke, R. J. Smith, J. S. Urquhart, F. Bigiel, N. Roy\nextracting tarball...\nmultiple tex files\nFound main document in: (0, './tmp/HIandCO.tex')\nFound main document in: ./tmp/HIandCO.tex\n0 ./tmp/HIandCO.tex\nFound main document in: ./tmp/HIandCO.tex\n*** Found document inclusions \n input command: PIP_113_Boulanger_authors_and_institutes\n*** print_tb\n File \"/home/jovyan/app.py\", line 847, in _expand_auxilary_files\n with open(directory + fname + '.tex', 'r', errors=\"surrogateescape\") as fauxilary:\n[Errno 2] No such file or directory: './tmp/PIP_113_Boulanger_authors_and_institutes.tex' \n\n*** Found macros and definitions in the header: \n\\providecommand{\\henrik}[1]{{\\bf \\color{green} [#1]}}\n\\providecommand{\\hhenrik}[1]{}\n\\providecommand{\\juan}[1]{{\\bf \\color{red} #1}}\n\\providecommand{\\commentproof}[1]{{\\bf \\color{green}#1}}\n\\providecommand{\\commentproof}[1]{}\n\\providecommand{\\planck}{\\Planck} \\def\\Herschel{\\textit{Herschetowardl}}\n\\providecommand{\\nh}{$N_{\\textsc{H}}$}\n\\providecommand{\\nhd}{N_{\\textsc{H}}} \n\\providecommand{\\gradnh}{$\\mathbf{\\nabla}N_{\\textsc{H}}$}\n\\providecommand{\\lognh}{$\\log_{10}(N_{\\textsc{H}}/\\mbox{cm}^{-2})$}\n\\providecommand{\\microG}{$\\mu$G}\n\\providecommand{\\bcf}{$B^{\\textsc{DCF}}_{\\perp}$}\n\\providecommand{\\bhilde}{$B^{\\textsc{\\HIL}}_{\\perp}$}\n\\providecommand{\\IRAS}{\\textit{IRAS\\/}}\n\\providecommand{\\WMAP}{\\textit{WMAP\\/}}\n\\providecommand{\\COBE}{\\textit{COBE\\/}}\n\\providecommand{\\Spitzer}{\\textit{Spitzer\\/}}\n\\providecommand{\\healpix}{{\\sc HEALPix}}\n\\providecommand{\\sextractor}{{\\sc SExtractor}}\n\\providecommand{\\hii}{\\ion{H}{II}}\n\\providecommand{\\viewangle}{\\alpha}\n\\providecommand{\\bvect}{\\vec{B}}\n\\providecommand{\\planckurl}{\\burl{http://www.rssd.esa.int/index.php?project=PLANCK&page=Planck_Collaboration}}\n\\providecommand{\\sorthelp}[1]{}\n\\providecommand{\\bperp}{$\\langle\\hat{\\vec{B}}_{\\perp}\\rangle$}\n\\providecommand{\\wc}{{\\mkern 2mu\\cdot\\mkern 
2mu}}\n\\providecommand{\\prs}{$V$}\n\\providecommand{\\mrv}{$r$}\n\\providecommand{\\kps}{km\\,s$^{-1}$}\n\\providecommand{\\vhi}{$v_{\\rm HI}$}\n\\providecommand{\\vco}{$v_{\\rm 13CO}$}\n\\providecommand{\\vlsr}{$v_{\\rm LSR}$}\n\\providecommand{\\vlos}{$v_{\\rm LOS}$}\\def\\bfc{}\n\\def\\bfc{\\bf}\n\\def\\bfm{\\bf \\color{magenta}}\n\\def\\bfm{} \\newcommand{\\commentproof}[1]{{\\bf \\color{green}#1}}\n\\def\\Herschel{\\textit{Herschetowardl}}\n\n**** From Heidelberg: True\n\n\\hl{J. D. Soler}, et al.; incl. \\hl{H. Beuther}, \\hl{M. Rugel}, \\hl{Y. Wang}, \\hl{Th. Henning}, \\hl{J. Kainulainen}, \\hl{J. C. Mottram}\nPDF postage: 1809.08338.pdf\n[arXiv:1809.08348]: On The Nature of Variations in the Measured Star Formation Efficiency of Molecular Clouds\n\tMichael Y. Grudić, Philip F. Hopkins, \\hl{Eve J. Lee}, Norman Murray, Claude-André Faucher-Giguère, L. Clifton Johnson\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\msun}{M_{\\sun}}\n\\providecommand{\\oft}{\\left(t\\right)}\\defcitealias{grudic:2016.sfe}{G18}\n\\defcitealias{lee:2016.gmc.eff}{L+16}\n\\defcitealias{vuti:2016.gmcs}{V+16}\n\\defcitealias{heyer:2016.clumps}{H+16}\n\\defcitealias{wu:2010.clumps}{W+10}\n\\defcitealias{evans:2014.sfe}{E+14}\n\\defcitealias{lada:2010.gmcs}{L+10}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08354]: beta Pictoris b post conjunction detection with VLT/SPHERE\n\tA.-M. Lagrange, A. Boccaletti, M. Langlois, G. Chauvin, R. Gratton, H. Beust, S. Desidera, J. Milli, M. Bonnefoy, \\hl{M. Feldt}, M. Meyer, A. Vigan, B. Biller, M. Bonavita, J.-L. Baudino, \\hl{F. Cantalloube}, M. Cudel, S. Daemgen, P. Delorme, V. DOrazi, J. Girard, C. Fontanive, J. Hagelberg, M. Janson, \\hl{M. Keppler}, T. Koypitova, R. Galicher, J. Lannier, H. Le Coroller, R. Ligi, \\hl{A.-L. Maire}, D. Mesa, S. Messina, \\hl{A. Mueller}, S. Peretti, C. Perrot, D. Rouan, G. Salter, \\hl{M. Samland}, T. Schmidt, E. Sissa, A. Zurlo, J.-L. Beuzit, D. Mouillet, C. Dominik, \\hl{T. Henning}, E. Lagadec, F. Menard, H.-M. Schmid, S. Udry, , the , SPHERE consortium\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\eg}{{\\it e.g.}, } \n\\providecommand{\\ie}{{\\it i.e.}, } \n\\providecommand{\\ms}{m\\,s$^{\\rm -1}$}\n\\providecommand{\\kms}{km\\,s$^{\\rm -1}$}\n\\providecommand{\\Mjup}{M$_{\\rm Jup}$}\n\\providecommand{\\mjup}{M$_{\\rm Jup}$}\n\\providecommand{\\mearth}{M$_{\\rm Earth}$} \n\\providecommand{\\Msun}{M$_{\\sun}$} \n\\providecommand{\\vsini}{$v\\sin{i}$}\n\\providecommand{\\elodie}{E{\\small LODIE}} \n\\providecommand{\\sophie}{S{\\small OPHIE}} \n\\providecommand{\\harps}{H{\\small ARPS}}\n\\providecommand{\\thetacyg}{$\\theta$\\,Cygni} \n\\providecommand{\\bp}{$\\beta$\\,Pictoris\\,} \n\\providecommand{\\bpic}{$\\beta$\\,Pictoris\\,}\n\\providecommand{\\acena}{$\\alpha$\\,CenA\\,} \n\\providecommand{\\acenb}{$\\alpha$\\,CenB\\,} \n\\providecommand{\\plmo}{$^{+}_{-} $}\n\\providecommand{\\muup}{$\\mu$m}\n\n**** From Heidelberg: True\n\nA.-M. Lagrange, et al.; incl. \\hl{M. Feldt}, \\hl{F. Cantalloube}, \\hl{M. Keppler}, \\hl{A.-L. Maire}, \\hl{A. Mueller}, \\hl{M. Samland}, \\hl{T. Henning}\nPDF postage: 1809.08354.pdf\n[arXiv:1809.08385]: Properties and occurrence rates of $Kepler$ exoplanet candidates as a function of host star metallicity from the DR25 catalog\n\tM. Narang, (TIFR), , P. 
Manoj, (TIFR), , E. Furlan, (IPAC), , C. Mordasini, (Physikalisches Institut, Univ. of Bern), , \\hl{T. Henning}, (MPIA), , B. Mathew, (Christ Univ.), , R. K. Banyal, (IIA), , T. Sivarani, (IIA)\nextracting tarball...\n*** Found macros and definitions in the header: \n\\def\\teff {{$T_\\mathrm{eff}$ }}\n\\def\\Re {{$\\,R_\\oplus$ }}\n\\def\\Me {{$\\,M_\\oplus$ }}\n\\def\\Rj {{$\\,R_J$ }}\n\\def\\Mj {{$\\,M_\\mathrm{J}$ }}\n\\def\\lg {{log$\\,g$ }}\n\\def\\pl {{planetary }}\n\\def\\plr {{planetary radius }}\n\\def\\plm {{planetary mass }}\n\\def \\hsm {{host star metallicity }}\n\\def \\hs {{host star }}\n\\def \\sc {{SWEET-Cat }}\n\n**** From Heidelberg: True\n\nM. Narang, et al.; incl. \\hl{T. Henning}\nPDF postage: 1809.08385.pdf\n[arXiv:1809.08499]: Dynamical instability and its implications for planetary system architecture\n\t\\hl{Dong-Hong Wu}, \\hl{Rachel C. Zhang}, Ji-Lin Zhou, Jason H. Steffen\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\kepler}{\\textit{Kepler}}\\graphicspath{{./}{figures/}}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08501]: Col-OSSOS: The Colours of the Outer Solar System Origins Survey\n\tMegan E. Schwamb, Wesley C. Fraser, Michele T. Bannister, Michael Marsset, Rosemary E. Pike, J. J. Kavelaars, Susan D. Benecchi, Matthew J. Lehner, \\hl{Shiang-Yu Wang}, Audrey Thirouin, Audrey Delsanti, Nuno Peixinho, Kathryn Volk, Mike Alexandersen, Ying-Tung Chen, Brett Gladman, Stephen D. J. Gwyn, Jean-Marc Petit\nextracting tarball...\n*** Found macros and definitions in the header: \n\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08739]: High-mass outflows identified from COHRS CO\\,(3 - 2) Survey\n\tQiang Li, Jianjun Zhou, Jarken Esimbek, Yuxin He, W. A. Baan, Dalei Li, \\hl{Gang Wu}, Xindi Tang, Weiguang Ji\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\vdag}{(v)^\\dagger}\n\\providecommand{\\RNum}[1]{\\uppercase\\expandafter{\\romannumeral #1\\relax}}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08798]: Infrared Spectra of Hexa-peri-hexabenzocoronene Cations:HBC+ and HBC2+\n\tJunfeng Zhen, Pablo Castellanos, \\hl{Jordy Bouwman}, Harold Linnartz, Alexander G. G. M. Tielens\nextracting tarball...\n*** Found macros and definitions in the header: \n\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08800]: Laboratory gas-phase infrared spectra of two astronomically relevant PAH cations: diindenoperylene, C$_{32}$H$_{16}$$^+$ and dicoronylene, C$_{48}$H$_{20}$$^+$\n\tJunfeng Zhen, Alessandra Candian, Pablo Castellanos, \\hl{Jordy Bouwman}, Harold Linnartz, Alexander G. G. M. 
Tielens\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 155, in main\n s = paper.retrieve_document_source('./tmp')\n File \"/home/jovyan/app.py\", line 1092, in retrieve_document_source\n tar = tarfile.open(mode='r|gz', fileobj=urlopen(where))\n File \"/srv/conda/lib/python3.6/tarfile.py\", line 1597, in open\n stream = _Stream(name, filemode, comptype, fileobj, bufsize)\n File \"/srv/conda/lib/python3.6/tarfile.py\", line 379, in __init__\n self._init_read_gz()\n File \"/srv/conda/lib/python3.6/tarfile.py\", line 484, in _init_read_gz\n raise ReadError(\"not a gzip file\")\nnot a gzip file \n\n[arXiv:1809.08844]: The Ophiuchus DIsc Survey Employing ALMA (ODISEA) - I : project description and continuum images at 28 au resolution\n\tLucas A. Cieza, Dary Ruíz-Rodríguez, Antonio Hales, Simon Casassus, Sebastian Pérez, Camilo Gonzalez-Ruilova, Hector Cánovas, Jonathan P. Williams, Alice Zurlo, Megan Ansdell, \\hl{Henning Avenhaus}, Amelia Bayo, \\hl{Gesa H. -M. Bertrang}, Valentin Christiaens, William Dent, Gabriel Ferrero, Roberto Gamen, Johan Olofsson, Santiago Orcajo, Karla Peña Ramírez, David Principe, \\hl{Matthias R. Schreiber}, Gerrit van der Plas\nextracting tarball...\n*** Found macros and definitions in the header: \n\n\n**** From Heidelberg: True\n\nLucas A. Cieza, et al.; incl. \\hl{Henning Avenhaus}, \\hl{Gesa H. -M. Bertrang}, \\hl{Matthias R. Schreiber}\nPDF postage: 1809.08844.pdf\n[arXiv:1809.08879]: EPIC 249451861b: an Eccentric Warm Saturn transiting a G-dwarf\n\t\\hl{Andrés Jordán}, Rafael Brahm, \\hl{Néstor Espinoza}, Cristián Cortés, Matías Díaz, Holger Drass, \\hl{Thomas Henning}, James S. Jenkins, Matías I. Jones, \\hl{Markus Rabus}, Felipe Rojas, \\hl{Paula Sarkis}, Maja Vučković, Abner Zapata, Maritza G. Soto, Gáspár Á. Bakos, Daniel Bayliss, Waqas Bhatti, Zoltan Csubry, Régis Lachaume, Víctor Moraga, Blake Pantoja, David Osip, Avi Shporer, Vincent Suc, Sergio Vásquez\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\vdag}{(v)^\\dagger}\n\\providecommand{\\feh}{\\ensuremath{{\\rm [Fe/H]}}}\n\\providecommand{\\teff}{\\ensuremath{T_{\\rm eff}}}\n\\providecommand{\\teq}{\\ensuremath{T_{\\rm eq}}}\n\\providecommand{\\logg}{\\ensuremath{\\log{g}}}\n\\providecommand{\\zaspe}{\\texttt{ZASPE}}\n\\providecommand{\\ceres}{\\texttt{CERES}}\n\\providecommand{\\vsini}{\\ensuremath{v \\sin{i}}}\n\\providecommand{\\kms}{\\ensuremath{{\\rm km\\,s^{-1}}}}\n\\providecommand{\\mjup}{\\ensuremath{{\\rm M_{J}}}}\n\\providecommand{\\mearth}{\\ensuremath{{\\rm M}_{\\oplus}}}\n\\providecommand{\\mpl}{\\ensuremath{{\\rm M_P}}}\n\\providecommand{\\rjup}{\\ensuremath{{\\rm R_J}}}\n\\providecommand{\\rpl}{\\ensuremath{{\\rm R_P}}}\n\\providecommand{\\rstar}{\\ensuremath{{\\rm R}_{\\star}}}\n\\providecommand{\\mstar}{\\ensuremath{{\\rm M}_{\\star}}}\n\\providecommand{\\lstar}{\\ensuremath{{\\rm L}_{\\star}}}\n\\providecommand{\\rsun}{\\ensuremath{{\\rm R}_{\\odot}}}\n\\providecommand{\\msun}{\\ensuremath{{\\rm M}_{\\odot}}}\n\\providecommand{\\lsun}{\\ensuremath{{\\rm L}_{\\odot}}}\n\\providecommand{\\mpkep}{\\ensuremath{0.315 \\pm 0.027 }}\n\\providecommand{\\rpkep}{\\ensuremath{0.847 \\pm 0.013 }}\n\\providecommand{\\mskep}{\\ensuremath{1.049_{-0.029}^{+0.021} }}\n\\providecommand{\\rskep}{\\ensuremath{1.085 \\pm 0.010 }}\n\\providecommand{\\per}{\\ensuremath{14.893291 \\pm 0.000025 }}\n\\providecommand{\\ecc}{\\ensuremath{0.478 \\pm 0.026 }}\n\\providecommand{\\sma}{\\ensuremath{0.1204_{-0.0011}^{0.0008} 
}}\n\\providecommand{\\plname}{EPIC~249451861b}\n\\providecommand{\\stname}{EPIC~249451861}\n\\providecommand{\\rhopl}{\\ensuremath{{\\rm \\rho_P}}}\n\\providecommand{\\rhopkep}{\\ensuremath{1.154 \\pm 0.045 }}\n\\providecommand{\\gccm}{\\ensuremath{\\mathrm{g}\\,\\mathrm{cm}^{-3}}}\\graphicspath{{./}{figures/}}\n\n**** From Heidelberg: True\n\n\\hl{Andrés Jordán}, et al.; incl. \\hl{Néstor Espinoza}, \\hl{Thomas Henning}, \\hl{Markus Rabus}, \\hl{Paula Sarkis}\nPDF postage: 1809.08879.pdf\n[arXiv:1809.08904]: Influence of metallicity on the near-surface effect affecting oscillation frequencies\n\tL. Manchon, K. Belkacem, R. Samadi, T. Sonoi, J. P. C. Marques, \\hl{H.-G. Ludwig}, E. Caffau\nextracting tarball...\n*** Found macros and definitions in the header: \n\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1806.02981]: Observational constraint on the dark energy scalar field\n\t\\hl{Ming-Jian Zhang}, Hong Li\nextracting tarball...\n*** Found macros and definitions in the header: \n\\def\\bea{\\begin{eqnarray}}\n\\def\\eea{\\end{eqnarray}}\n\\def\\be{\\begin{equation}}\n\\def\\ee{\\end{equation}}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.08969]: Experimental results from the ST7 mission on LISA Pathfinder\n\tG Anderson, J Anderson, M Anderson, G Aveni, D Bame, P Barela, K Blackman, A Carmain, L Chen, M Cherng, S Clark, M Connally, W Connolly, D Conroy, M Cooper, C Cutler, J D'Agostino, N Demmons, E Dorantes, C Dunn, M Duran, E Ehrbar, J Evans, J Fernandez, G Franklin, M Girard, J Gorelik, V Hruby, O Hsu, D Jackson, S Javidnia, D Kern, M Knopp, R Kolasinski, C Kuo, T Le, I Li, O Liepack, A Littlefield, P Maghami, S Malik, L Markley, \\hl{R Martin}, C Marrese-Reading, J Mehta, J Mennela, D Miller, D Nguyen, J O'Donnell, R Parikh, G Plett, T Ramsey, T Randolph, S Rhodes, A Romero-Wolf, T Roy, A Ruiz, H Shaw, J Slutsky, D Spence, J Stocky, J Tallon, I Thorpe, W Tolman, H Umfress, R Valencia, C Valerio, W Warner, J Wellman, P Willis, J Ziemer, J Zwahlen, M Armano, H Audley, J Baird, P Binetruy, , et al. 
(72 additional authors not shown)\nextracting tarball...\n*** Found macros and definitions in the header: \n\\providecommand{\\subf}[2]{ {\\small\\begin{tabular}[t]{@{}c@{}}\n\\providecommand{\\red}[1]{\\textcolor{red}{\\bf #1}}\n\\providecommand{\\blue}[1]{\\textcolor{blue}{\\bf #1}}\n\\providecommand{\\private}[1]{}\n\\providecommand{\\braket}[2]{\\left\\langle#1\\,|\\,#2\\,\\right\\rangle} \n\\providecommand{\\expec}[1]{\\langle#1\\rangle} \n\\providecommand{\\be}{\\begin{equation}}\n\\providecommand{\\ee}{\\end{equation}}\n\\providecommand{\\bea}{\\begin{eqnarray}}\n\\providecommand{\\eea}{\\end{eqnarray}}\n\\providecommand{\\bdm}{\\begin{displaymath}}\n\\providecommand{\\edm}{\\end{displaymath}}\n\\providecommand{\\drm}{{\\rm d}}\\def\\lesssim{\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$<$}}}}\n\\def\\gtrsim{\\mathrel{\\hbox{\\rlap{\\hbox{\\lower4pt\\hbox{$\\sim$}}}\\hbox{$>$}}}}\n\\def\\cos{\\rm cos}\n\\def\\sin{\\rm sin}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n[arXiv:1809.09009]: Plausible home stars of the interstellar object 'Oumuamua found in Gaia DR2\n\t\\hl{C.A.L. Bailer-Jones}, (MPIA Heidelberg), , D. Farnocchia, (JPL), , K.J. Meech, (Uni. Hawai'i), , R. Brasser, (Tokyo Institute of Technology), , M. Micheli, (ESA SSA-NEO Coordination Centre), , S. Chakrabarti, (Rochester Institute of Technology), , M.W. Buie, (Southwest Research Institute), , O.R. Hainaut, (ESO)\nextracting tarball...\n*** Found document inclusions \n input command: figures/cands41plushome4_encounters\n*** Found macros and definitions in the header: \n\\providecommand{\\deriv}{\\ensuremath{\\mathrm{d}}}\n\\providecommand{\\given}{\\ensuremath{\\hspace{0.05em}\\mid\\hspace{0.05em}}}\n\\providecommand{\\okina}{`}\n\\providecommand{\\uone}{1I/2017~U1}\n\\providecommand{\\oum}{{{\\okina}Oumuamua}}\n\\providecommand{\\Hawaii}{Hawai{\\okina}i}\n\\providecommand{\\rainf}{\\ensuremath{\\ra_\\infty}}\n\\providecommand{\\decinf}{\\ensuremath{\\dec_\\infty}}\n\\providecommand{\\vinf}{\\ensuremath{v_\\infty}}\n\\providecommand{\\candsaa}{2\\,k=2}\n\\providecommand{\\candsbb}{3\\,k=2}\n\\providecommand{\\candscc}{3\\,k=1}\n\\providecommand{\\candsdd}{7d}\n\\providecommand{\\candsee}{7e}\n\\providecommand{\\candsff}{7c}\n\\providecommand{\\gaia}{Gaia}\n\\providecommand{\\gdr}[1]{Gaia\\,DR{#1}}\n\\providecommand{\\gmag}{\\ensuremath{G}}\n\\providecommand{\\mg}{M$_\\gmag$}\n\\providecommand{\\bprp}{BP-RP}\n\\providecommand{\\teff}{\\ensuremath{T_{\\rm eff}}}\n\\providecommand{\\tenc}{\\ensuremath{t_{\\rm enc}}}\n\\providecommand{\\denc}{\\ensuremath{d_{\\rm enc}}}\n\\providecommand{\\venc}{\\ensuremath{v_{\\rm enc}}}\n\\providecommand{\\tenclma}{\\ensuremath{t_{\\rm enc}^{\\rm lma}}}\n\\providecommand{\\denclma}{\\ensuremath{d_{\\rm enc}^{\\rm lma}}}\n\\providecommand{\\venclma}{\\ensuremath{v_{\\rm enc}^{\\rm lma}}}\n\\providecommand{\\tencmed}{\\ensuremath{t_{\\rm enc}^{\\rm med}}}\n\\providecommand{\\dencmed}{\\ensuremath{d_{\\rm enc}^{\\rm med}}}\n\\providecommand{\\vencmed}{\\ensuremath{v_{\\rm enc}^{\\rm med}}}\n\\providecommand{\\ra}{\\ensuremath{\\alpha}}\n\\providecommand{\\dec}{\\ensuremath{\\delta}}\n\\providecommand{\\pmra}{\\ensuremath{\\mu_{\\ra\\ast}}}\n\\providecommand{\\parallax}{\\ensuremath{\\varpi}}\n\\providecommand{\\parzp}{\\ensuremath{\\varpi_{\\rm 
zp}}}\n\\providecommand{\\sigparallax}{\\ensuremath{\\sigma_{\\varpi}}}\n\\providecommand{\\pmdec}{\\ensuremath{\\mu_\\dec}}\n\\providecommand{\\propm}{\\ensuremath{\\mu}}\n\\providecommand{\\vx}{\\ensuremath{v_x}}\n\\providecommand{\\vy}{\\ensuremath{v_y}}\n\\providecommand{\\vz}{\\ensuremath{v_z}}\n\\providecommand{\\sigmavx}{\\ensuremath{\\sigma(\\vx)}}\n\\providecommand{\\sigmavy}{\\ensuremath{\\sigma(\\vy)}}\n\\providecommand{\\sigmavz}{\\ensuremath{\\sigma(\\vz)}}\n\\providecommand{\\corvxvy}{\\ensuremath{\\rho(\\vx, \\vy)}}\n\\providecommand{\\corvxvz}{\\ensuremath{\\rho(\\vx, \\vz)}}\n\\providecommand{\\corvyvz}{\\ensuremath{\\rho(\\vy, \\vz)}}\n\\providecommand{\\vr}{\\ensuremath{v_r}} \n\\providecommand{\\sigvr}{\\ensuremath{\\sigma(\\vr)}}\n\\providecommand{\\vtan}{\\ensuremath{v_T}}\n\\providecommand{\\rsol}{\\ensuremath{r_\\odot}}\n\\providecommand{\\zsol}{\\ensuremath{z_\\odot}}\n\\providecommand{\\glon}{\\ensuremath{l}}\n\\providecommand{\\glat}{\\ensuremath{b}}\n\\providecommand{\\rvec}{\\ensuremath{\\boldsymbol{r}}}\n\\providecommand{\\vvec}{\\ensuremath{\\boldsymbol{v}}}\n\\providecommand{\\kms}{\\ensuremath{\\textrm{km\\,s}^{-1}}}\n\\providecommand{\\maspyr}{\\ensuremath{\\textrm{mas\\,yr}^{-1}}}\n\\providecommand{\\msun}{\\ensuremath{M_\\odot}}\n\\providecommand{\\degree}{\\ensuremath{^\\circ}}\n\\providecommand{\\red}{\\textcolor{red}}\n\\providecommand{\\blue}{\\textcolor{blue}}\\def\\myeol{\\\\}\n\\definecolor{VeryDarkBlue}{RGB}{0,0,80}\n\\definecolor{VeryDarkRed}{RGB}{90,0,00}\n\n**** From Heidelberg: True\n\n\\hl{C.A.L. Bailer-Jones}, et al.; incl. \nPDF postage: 1809.09009.pdf\n[arXiv:1809.09080]: Detecting Water In the atmosphere of HR 8799 c with L-band High Dispersion Spectroscopy Aided By Adaptive Optics\n\t\\hl{Ji Wang}, Dimitri Mawet Jonathan J. Fortney, Callie Hood, Caroline V. 
Morley, Bjorn Benneke\nextracting tarball...\nmultiple tex files\nFound main document in: (0, './tmp/ms.tex')\nFound main document in: ./tmp/ms.tex\n0 ./tmp/ms.tex\nFound main document in: ./tmp/ms.tex\n*** Found document inclusions \n input command: obs_summary\n input command: Telescope_Instrument\n input command: HR8799c\n input command: Simulation_results\n input command: Telescope_Instrument_Sun_Earth\n*** print_tb\n File \"/home/jovyan/app.py\", line 847, in _expand_auxilary_files\n with open(directory + fname + '.tex', 'r', errors=\"surrogateescape\") as fauxilary:\n[Errno 2] No such file or directory: './tmp/Telescope_Instrument_Sun_Earth.tex' \n\n input command: Sun_Earth\n*** print_tb\n File \"/home/jovyan/app.py\", line 847, in _expand_auxilary_files\n with open(directory + fname + '.tex', 'r', errors=\"surrogateescape\") as fauxilary:\n[Errno 2] No such file or directory: './tmp/Sun_Earth.tex' \n\n*** Found macros and definitions in the header: \n\\providecommand{\\totaltargets}{138 }\n\\providecommand{\\totalplanets}{97 }\n\\providecommand{\\totalmulti}{27 }\n\\providecommand{\\detectstar}{42 } \n\\providecommand{\\detectsys}{35 } \n\\providecommand{\\rvstar}{22 }\n\\providecommand{\\myaostar}{60 }\n\\providecommand{\\myaopalomar}{68 }\n\\providecommand{\\myaokeck}{5 }\n\\providecommand{\\myaototal}{73 }\n\\providecommand{\\myaototalno}{65 }\n\\providecommand{\\myaonewstar}{29 }\n\\providecommand{\\myaonewsys}{22 }\n\\providecommand{\\myaonewcolor}{8 }\n\\providecommand{\\myaomiss}{11 }\n\\providecommand{\\multicolor}{21 }\n\\providecommand{\\multicolorod}{6 }\n\\providecommand{\\multicolorsubarc}{5 }\n\\providecommand{\\singleall}{38 }\n\\providecommand{\\singleK}{29 }\n\\providecommand{\\starK}{51 }\n\\providecommand{\\rhk}{\\mbox{$\\log R^\\prime_{\\rm HK}$}}\n\\providecommand{\\vdag}{(v)^\\dagger}\\def\\au{\\mbox{au}}\n\n**** From Heidelberg: False\n\n*** print_tb\n File \"/home/jovyan/mpia.py\", line 160, in main\n raise RuntimeError('Not an institute paper')\nNot an institute paper \n\n Issues =============================== \n[arXiv:1809.08261] Jason J. Wang, Zhaohuan Zhu \n Not an institute paper\n[arXiv:1809.08301] Alex B. Walter \n Not an institute paper\n[arXiv:1809.08348] Eve J. Lee \n Not an institute paper\n[arXiv:1809.08499] Dong-Hong Wu, Rachel C. Zhang \n Not an institute paper\n[arXiv:1809.08501] Shiang-Yu Wang \n Not an institute paper\n[arXiv:1809.08739] Gang Wu \n Not an institute paper\n[arXiv:1809.08798] Jordy Bouwman \n Not an institute paper\n[arXiv:1809.08800] Jordy Bouwman \n not a gzip file\n[arXiv:1809.08904] H.-G. Ludwig \n Not an institute paper\n[arXiv:1806.02981] Ming-Jian Zhang \n Not an institute paper\n[arXiv:1809.08969] R Martin \n Not an institute paper\n[arXiv:1809.09080] Ji Wang \n Not an institute paper\n Matched Authors ====================== \n[arXiv:1809.08236] Wel Arjen van der Wel\n[arXiv:1809.08236] Wu Po-Feng Wu\n[arXiv:1809.08236] Barisic Ivana Barisic\n[arXiv:1809.08236] Chauke Priscilla Chauke\n[arXiv:1809.08236] Houdt Josha van Houdt\n[arXiv:1809.08239] Pillepich Annalisa Pillepich\n[arXiv:1809.08241] Joshi Gandhali Joshi\n[arXiv:1809.08243] Gould Andrew Gould\n[arXiv:1809.08245] Martin Nicolas Martin\n[arXiv:1809.08261] Wang Jason J. Wang\n[arXiv:1809.08261] Zhu Zhaohuan Zhu\n[arXiv:1809.08301] Walter Alex B. Walter\n[arXiv:1809.08338] Soler J. D. Soler\n[arXiv:1809.08338] Beuther H. Beuther\n[arXiv:1809.08338] Rugel M. Rugel\n[arXiv:1809.08338] Wang Y. Wang\n[arXiv:1809.08338] Henning Th. 
Henning\n[arXiv:1809.08338] Kainulainen J. Kainulainen\n[arXiv:1809.08338] Mottram J. C. Mottram\n[arXiv:1809.08348] Lee Eve J. Lee\n[arXiv:1809.08354] Feldt M. Feldt\n[arXiv:1809.08354] Cantalloube F. Cantalloube\n[arXiv:1809.08354] Keppler M. Keppler\n[arXiv:1809.08354] Maire A.-L. Maire\n[arXiv:1809.08354] Mueller A. Mueller\n[arXiv:1809.08354] Samland M. Samland\n[arXiv:1809.08354] Henning T. Henning\n[arXiv:1809.08385] Henning T. Henning\n[arXiv:1809.08499] Wu Dong-Hong Wu\n[arXiv:1809.08499] Zhang Rachel C. Zhang\n[arXiv:1809.08501] Wang Shiang-Yu Wang\n[arXiv:1809.08739] Wu Gang Wu\n[arXiv:1809.08798] Bouwman Jordy Bouwman\n[arXiv:1809.08800] Bouwman Jordy Bouwman\n[arXiv:1809.08844] Avenhaus Henning Avenhaus\n[arXiv:1809.08844] Bertrang Gesa H. -M. Bertrang\n[arXiv:1809.08844] Schreiber Matthias R. Schreiber\n[arXiv:1809.08879] Jordán Andrés Jordán\n[arXiv:1809.08879] Espinoza Néstor Espinoza\n[arXiv:1809.08879] Henning Thomas Henning\n[arXiv:1809.08879] Rabus Markus Rabus\n[arXiv:1809.08879] Sarkis Paula Sarkis\n[arXiv:1809.08904] Ludwig H.-G. Ludwig\n[arXiv:1806.02981] Zhang Ming-Jian Zhang\n[arXiv:1809.08969] Martin R Martin\n[arXiv:1809.09009] Bailer-Jones C.A.L. Bailer-Jones\n[arXiv:1809.09080] Wang Ji Wang\n Compiled outputs ===================== \n[arXiv:1809.08236] Arjen van der Wel, Po-Feng Wu, Ivana Barisic, Priscilla Chauke, Josha van Houdt\n[arXiv:1809.08239] Annalisa Pillepich\n[arXiv:1809.08241] Gandhali Joshi\n[arXiv:1809.08243] Andrew Gould\n[arXiv:1809.08245] Nicolas Martin\n[arXiv:1809.08338] J. D. Soler, H. Beuther, M. Rugel, Y. Wang, Th. Henning, J. Kainulainen, J. C. Mottram\n[arXiv:1809.08354] M. Feldt, F. Cantalloube, M. Keppler, A.-L. Maire, A. Mueller, M. Samland, T. Henning\n[arXiv:1809.08385] T. Henning\n[arXiv:1809.08844] Henning Avenhaus, Gesa H. -M. Bertrang, Matthias R. Schreiber\n[arXiv:1809.08879] Andrés Jordán, Néstor Espinoza, Thomas Henning, Markus Rabus, Paula Sarkis\n[arXiv:1809.09009] C.A.L. Bailer-Jones\n"
],
[
"# Some current security measures prevent loading properly pdf previews.\n# Converting pdfs into pngs.\n\n!for f in `ls *pdf`; do echo ${f} && convert ${f} ${f}.png; done",
"1809.08236.pdf\n1809.08239.pdf\n1809.08241.pdf\n"
],
[
"# Display preview of the compiled outputs\n\nfrom IPython.display import IFrame, HTML\nfrom glob import glob\n\ncode = ''\nfor fname in glob('*.png'):\n code += IFrame(fname, width=600, height=1000)._repr_html_()\n \nHTML(code)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0dbceab2ba05cf6070604a214808fd41416189b | 38,392 | ipynb | Jupyter Notebook | docs/remapping.ipynb | BenniSchmiedel/ECO | 6359240d96a801529fe8c428f1e87dab33da3eff | [
"MIT"
] | 2 | 2021-03-17T14:40:45.000Z | 2022-02-17T09:09:51.000Z | docs/remapping.ipynb | BenniSchmiedel/ECO | 6359240d96a801529fe8c428f1e87dab33da3eff | [
"MIT"
] | null | null | null | docs/remapping.ipynb | BenniSchmiedel/ECO | 6359240d96a801529fe8c428f1e87dab33da3eff | [
"MIT"
] | null | null | null | 82.741379 | 12,220 | 0.786049 | [
[
[
"# Conservative remapping",
"_____no_output_____"
]
],
[
[
"import xgcm\nimport xarray as xr\nimport numpy as np\nimport xbasin",
"_____no_output_____"
]
],
[
[
"We open the example data and create 2 grids: 1 for the dataset we have and 1 for the remapped one.\nHere '_fr' means *from* and '_to' *to* (i.e. remapped data).",
"_____no_output_____"
]
],
[
[
"ds = xr.open_dataset('data/nemo_output_ex.nc')\n\n# NB: the next two lines override the example dataset with a full NEMO run\n# from a local path; keep the example file above if you do not have this data.\nfrom xnemogcm import open_nemo_and_domain_cfg\nds = open_nemo_and_domain_cfg(datadir='/home/romain/Documents/Education/PhD/Courses/2019-OC6310/Project/Experiments/EXP_eos00/Rawdata')\n\nmetrics_fr = {\n ('X',): ['e1t', 'e1u', 'e1v', 'e1f'],\n ('Y',): ['e2t', 'e2u', 'e2v', 'e2f'],\n ('Z',): ['e3t', 'e3u', 'e3v', 'e3w']\n}\nmetrics_to = {\n ('X',): ['e1t', 'e1u', 'e1v', 'e1f'],\n ('Y',): ['e2t', 'e2u', 'e2v', 'e2f'],\n ('Z',): ['e3t_1d', 'e3w_1d']\n}\ngrid_fr = xgcm.Grid(ds, periodic=False, metrics=metrics_fr)\ngrid_to = xgcm.Grid(ds, periodic=False, metrics=metrics_to)\n\n# Convert the thetao float32 to float64 for more precision\nds.thetao.values = ds.thetao.values.astype(np.float64)\n\nprint(ds)",
"<xarray.Dataset>\nDimensions: (axis_nbounds: 2, t: 5, x_c: 21, x_f: 21, y_c: 40, y_f: 40, z_c: 36, z_f: 36)\nCoordinates:\n * z_f (z_f) float64 -0.5 0.5 1.5 2.5 3.5 ... 31.5 32.5 33.5 34.5\n * t (t) object 7756-01-01 00:00:00 ... 7796-01-01 00:00:00\n * x_c (x_c) int64 0 1 2 3 4 5 6 7 8 ... 12 13 14 15 16 17 18 19 20\n * y_c (y_c) int64 0 1 2 3 4 5 6 7 8 ... 31 32 33 34 35 36 37 38 39\n * z_c (z_c) int64 0 1 2 3 4 5 6 7 8 ... 27 28 29 30 31 32 33 34 35\n * x_f (x_f) float64 0.5 1.5 2.5 3.5 4.5 ... 17.5 18.5 19.5 20.5\n * y_f (y_f) float64 0.5 1.5 2.5 3.5 4.5 ... 36.5 37.5 38.5 39.5\nDimensions without coordinates: axis_nbounds\nData variables:\n depthw_bounds (z_f, axis_nbounds) float32 ...\n t_bounds (t, axis_nbounds) object ...\n woce (t, z_f, y_c, x_c) float32 ...\n e3w (t, z_f, y_c, x_c) float32 ...\n deptht_bounds (z_c, axis_nbounds) float32 ...\n e3t (t, z_c, y_c, x_c) float32 ...\n thetao (t, z_c, y_c, x_c) float64 0.0 24.28 24.15 ... 0.0 0.0 0.0\n so (t, z_c, y_c, x_c) float32 ...\n rhop (t, z_c, y_c, x_c) float32 ...\n tos (t, y_c, x_c) float32 ...\n sos (t, y_c, x_c) float32 ...\n zos (t, y_c, x_c) float32 ...\n bn2 (t, z_c, y_c, x_c) float32 ...\n mldr10_1 (t, y_c, x_c) float32 ...\n relvor (t, z_c, y_c, x_c) float32 ...\n qsr (t, y_c, x_c) float32 ...\n qns (t, y_c, x_c) float32 ...\n empmr (t, y_c, x_c) float32 ...\n qt (t, y_c, x_c) float32 ...\n saltflx (t, y_c, x_c) float32 ...\n botpres (t, y_c, x_c) float32 ...\n depthu_bounds (z_c, axis_nbounds) float32 ...\n e3u (t, z_c, y_c, x_f) float32 ...\n uos (t, y_c, x_f) float32 ...\n uo (t, z_c, y_c, x_f) float32 ...\n depthv_bounds (z_c, axis_nbounds) float32 ...\n e3v (t, z_c, y_f, x_c) float32 ...\n vos (t, y_f, x_c) float32 ...\n vo (t, z_c, y_f, x_c) float32 ...\n nav_lon (y_c, x_c) float32 ...\n nav_lat (y_c, x_c) float32 ...\n jpiglo int32 ...\n jpjglo int32 ...\n jpkglo int32 ...\n jperio int32 ...\n ln_zco int32 ...\n ln_zps int32 ...\n ln_sco int32 ...\n ln_isfcav int32 ...\n glamt (y_c, x_c) float64 ...\n glamu (y_c, x_f) float64 ...\n glamv (y_f, x_c) float64 ...\n glamf (y_f, x_f) float64 ...\n gphit (y_c, x_c) float64 ...\n gphiu (y_c, x_f) float64 ...\n gphiv (y_f, x_c) float64 ...\n gphif (y_f, x_f) float64 ...\n e1t (y_c, x_c) float64 ...\n e1u (y_c, x_f) float64 ...\n e1v (y_f, x_c) float64 ...\n e1f (y_f, x_f) float64 ...\n e2t (y_c, x_c) float64 ...\n e2u (y_c, x_f) float64 ...\n e2v (y_f, x_c) float64 ...\n e2f (y_f, x_f) float64 ...\n ff_f (y_f, x_f) float64 ...\n ff_t (y_c, x_c) float64 ...\n e3t_1d (z_c) float64 ...\n e3w_1d (z_f) float64 ...\n e3t_0 (z_c, y_c, x_c) float64 ...\n e3u_0 (z_c, y_c, x_f) float64 ...\n e3v_0 (z_c, y_f, x_c) float64 ...\n e3f_0 (z_c, y_f, x_f) float64 ...\n e3w_0 (z_f, y_c, x_c) float64 ...\n e3uw_0 (z_f, y_c, x_f) float64 ...\n e3vw_0 (z_f, y_f, x_c) float64 ...\n top_level (y_c, x_c) float64 ...\n bottom_level (y_c, x_c) float64 ...\n stiffness (y_c, x_c) float64 ...\n gdept_0 (z_c, y_c, x_c) float64 ...\n gdepw_0 (z_f, y_c, x_c) float64 ...\n ht_0 (y_c, x_c) float64 ...\n hu_0 (y_c, x_f) float64 ...\n hv_0 (y_f, x_c) float64 ...\n tmask (z_c, y_c, x_c) float64 ...\n umask (z_c, y_c, x_f) float64 ...\n vmask (z_c, y_f, x_c) float64 ...\n fmask (z_c, y_f, x_f) float64 ...\n tmaskutil (y_c, x_c) float64 ...\n umaskutil (y_c, x_f) float64 ...\n vmaskutil (y_f, x_c) float64 ...\n mbathy (y_c, x_c) float64 ...\n misf (y_c, x_c) float64 ...\n isfdraft (y_c, x_c) float64 ...\n gdept_1d (z_c) float64 ...\n gdepw_1d (z_f) float64 ...\n fmaskutil (y_f, x_f) float64 ...\n"
]
],
[
[
"## Remap a T point",
"_____no_output_____"
]
],
[
[
"%timeit xbasin.remap_vertical(ds.thetao, grid_fr, grid_to, axis='Z')",
"14.1 ms ± 623 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
],
[
"theta_to = xbasin.remap_vertical(ds.thetao, grid_fr, grid_to, axis='Z')\nprint(theta_to.coords)",
"Coordinates:\n * z_c (z_c) int64 0 1 2 3 4 5 6 7 8 9 ... 26 27 28 29 30 31 32 33 34 35\n * t (t) object 7756-01-01 00:00:00 ... 7796-01-01 00:00:00\n * x_c (x_c) int64 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20\n * y_c (y_c) int64 0 1 2 3 4 5 6 7 8 9 ... 30 31 32 33 34 35 36 37 38 39\n"
]
],
[
[
"The total heat content is conserved:",
"_____no_output_____"
]
],
[
[
"hc_fr = grid_fr.integrate(ds.thetao, axis='Z')\nhc_to = grid_to.integrate(theta_to, axis='Z')\n\n(hc_fr == hc_to).all()",
"_____no_output_____"
]
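,
[
"# Extra check (a sketch, not in the original notebook): exact '==' equality\n# can be brittle with floating-point arithmetic, so a tolerance-based\n# comparison is a more robust way to verify conservation.\nimport numpy as np\n\nnp.allclose(hc_fr.values, hc_to.values, rtol=1e-12, atol=0.0)",
"_____no_output_____"
]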
],
[
[
"## Remap a W point",
"_____no_output_____"
]
],
[
[
"w_to = xbasin.remap_vertical(ds.woce*0+1, grid_fr, grid_to, axis='Z')\ngrid_to.integrate(w_to, axis='Z')[-1].plot()",
"_____no_output_____"
],
[
"grid_fr.integrate((ds.woce*0+1), axis='Z')[-1].plot()",
"_____no_output_____"
]
],
[
[
"## Time comparison\n\nThe core remapping function is compiled from Python to C++ with Pythran, which greatly improves speed. If Pythran is not installed, the original pure-Python function is called instead.\nAs a user you should not call the two following functions directly; they are imported here only for the timing comparison.",
"_____no_output_____"
]
],
[
[
"fake_dataset = [\n np.ascontiguousarray(ds.gdept_0.values.reshape(ds.gdept_0.values.shape+(1,))),\n np.ascontiguousarray(ds.gdepw_0.values.reshape(ds.gdepw_0.values.shape+(1,))),\n np.ascontiguousarray(ds.thetao.transpose('z_c', 'y_c', 'x_c', 't').values.flatten().reshape(ds.thetao.transpose('z_c', 'y_c', 'x_c', 't').shape)[...,0:1])\n]\nfrom xbasin._interpolation import interp_new_vertical as _interpolation_pure_python\nfrom xbasin.interpolation_compiled import interp_new_vertical as _interpolation_pythran",
"_____no_output_____"
]
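,
[
"# Hedged sketch of the fallback described above (an assumed pattern, not\n# necessarily xbasin's exact code): prefer the Pythran-compiled kernel and\n# fall back to the pure-Python implementation if it is unavailable.\ntry:\n from xbasin.interpolation_compiled import interp_new_vertical\nexcept ImportError:\n from xbasin._interpolation import interp_new_vertical",
"_____no_output_____"
]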
],
[
[
"### Pure Python",
"_____no_output_____"
]
],
[
[
"%timeit _interpolation_pure_python(*fake_dataset)",
"120 ms ± 1.1 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)\n"
]
],
[
[
"### Pythran",
"_____no_output_____"
]
],
[
[
"%timeit _interpolation_pythran(*fake_dataset)",
"2.66 ms ± 16.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
]
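,
[
"# Sanity check (a sketch, assuming both kernels return plain numpy arrays):\n# the compiled and pure-Python implementations should agree on the same inputs.\nout_python = _interpolation_pure_python(*fake_dataset)\nout_pythran = _interpolation_pythran(*fake_dataset)\nnp.allclose(out_python, out_pythran)",
"_____no_output_____"
]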
],
[
[
"We see that the compiled version runs about 10-100 times faster (here roughly 45x: ~120 ms versus ~2.7 ms per call; the exact factor depends on the data and the machine). The pure-Python version does not use vectorized array operations and is thus much slower.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
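"code",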
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
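"code",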
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
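"code",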
"code"
],
[
"markdown"
]
] |
d0dbd46313b8dbb2e0e898f37a05cef51e548a19 | 993,644 | ipynb | Jupyter Notebook | tensorflow/examples/tutorials/deepdream/deepdream.ipynb | espuer/tensorflow | 90ca8ecf4a5f4e3660aa413364f8b9f34392756f | [
"Apache-2.0"
] | 6 | 2016-09-07T18:38:41.000Z | 2020-01-12T23:01:03.000Z | tensorflow/examples/tutorials/deepdream/deepdream.ipynb | espuer/tensorflow | 90ca8ecf4a5f4e3660aa413364f8b9f34392756f | [
"Apache-2.0"
] | null | null | null | tensorflow/examples/tutorials/deepdream/deepdream.ipynb | espuer/tensorflow | 90ca8ecf4a5f4e3660aa413364f8b9f34392756f | [
"Apache-2.0"
] | 8 | 2017-06-08T09:46:06.000Z | 2021-06-20T14:03:19.000Z | 717.950867 | 167,805 | 0.853743 | [
[
[
"# DeepDreaming with TensorFlow",
"_____no_output_____"
],
[
">[Loading and displaying the model graph](#loading)\n\n>[Naive feature visualization](#naive)\n\n>[Multiscale image generation](#multiscale)\n\n>[Laplacian Pyramid Gradient Normalization](#laplacian)\n\n>[Playing with feature visualizations](#playing)\n\n>[DeepDream](#deepdream)\n\n",
"_____no_output_____"
],
[
"This notebook demonstrates a number of Convolutional Neural Network image generation techniques implemented with TensorFlow for fun and science:\n\n- visualize individual feature channels and their combinations to explore the space of patterns learned by the neural network (see [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) galleries)\n- embed TensorBoard graph visualizations into Jupyter notebooks\n- produce high-resolution images with tiled computation ([example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg))\n- use Laplacian Pyramid Gradient Normalization to produce smooth and colorful visuals at low cost\n- generate DeepDream-like images with TensorFlow (DogSlugs included)\n\n\nThe network under examination is the [GoogLeNet architecture](http://arxiv.org/abs/1409.4842), trained to classify images into one of 1000 categories of the [ImageNet](http://image-net.org/) dataset. It consists of a set of layers that apply a sequence of transformations to the input image. The parameters of these transformations were determined during the training process by a variant of the gradient descent algorithm. The internal image representations may seem obscure, but it is possible to visualize and interpret them. In this notebook we are going to present a few tricks that allow us to make these visualizations both efficient to generate and even beautiful. Impatient readers can start by exploring the full galleries of images generated by the method described here for [GoogLeNet](http://storage.googleapis.com/deepdream/visualz/tensorflow_inception/index.html) and [VGG16](http://storage.googleapis.com/deepdream/visualz/vgg16/index.html) architectures.",
"_____no_output_____"
]
],
[
[
"# boilerplate code\nfrom __future__ import print_function\nimport os\nfrom io import BytesIO\nimport numpy as np\nfrom functools import partial\nimport PIL.Image\nfrom IPython.display import clear_output, Image, display, HTML\n\nimport tensorflow as tf",
"_____no_output_____"
]
],
[
[
"<a id='loading'></a>\n## Loading and displaying the model graph\n\nThe pretrained network can be downloaded [here](https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip). Unpack the `tensorflow_inception_graph.pb` file from the archive and set its path in the `model_fn` variable. Alternatively you can uncomment and run the following cell to download the network:",
"_____no_output_____"
]
],
[
[
"#!wget https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip && unzip inception5h.zip",
"_____no_output_____"
],
[
"model_fn = 'tensorflow_inception_graph.pb'\n\n# creating TensorFlow session and loading the model\ngraph = tf.Graph()\nsess = tf.InteractiveSession(graph=graph)\nwith tf.gfile.FastGFile(model_fn, 'rb') as f:\n graph_def = tf.GraphDef()\n graph_def.ParseFromString(f.read())\nt_input = tf.placeholder(np.float32, name='input') # define the input tensor\nimagenet_mean = 117.0\nt_preprocessed = tf.expand_dims(t_input-imagenet_mean, 0)\ntf.import_graph_def(graph_def, {'input':t_preprocessed})",
"_____no_output_____"
]
],
[
[
"To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.",
"_____no_output_____"
]
],
[
[
"layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]\nfeature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]\n\nprint('Number of layers', len(layers))\nprint('Total number of feature channels:', sum(feature_nums))\n\n\n# Helper functions for TF Graph visualization\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = \"<stripped %d bytes>\"%size\n return strip_def\n \ndef rename_nodes(graph_def, rename_func):\n res_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = res_def.node.add() \n n.MergeFrom(n0)\n n.name = rename_func(n.name)\n for i, s in enumerate(n.input):\n n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])\n return res_def\n \ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n \n iframe = \"\"\"\n <iframe seamless style=\"width:800px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '"'))\n display(HTML(iframe))\n\n# Visualizing the network graph. Be sure expand the \"mixed\" nodes to see their \n# internal structure. We are going to visualize \"Conv2D\" nodes.\ntmp_def = rename_nodes(graph_def, lambda s:\"/\".join(s.split('_',1)))\nshow_graph(tmp_def)",
"Number of layers 59\nTotal number of feature channels: 7548\n"
]
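,
[
"# A quick peek (a sketch, not in the original notebook) at a few layer names,\n# to help pick a layer and channel to visualize below.\nprint(layers[:5])",
"_____no_output_____"
]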
],
[
[
"<a id='naive'></a>\n## Naive feature visualization",
"_____no_output_____"
],
[
"Let's start with a naive way of visualizing these. Image-space gradient ascent!",
"_____no_output_____"
]
],
[
[
"# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity\n# to have non-zero gradients for features with negative initial activations.\nlayer = 'mixed4d_3x3_bottleneck_pre_relu'\nchannel = 139 # picking some feature channel to visualize\n\n# start with a gray image with a little noise\nimg_noise = np.random.uniform(size=(224,224,3)) + 100.0\n\ndef showarray(a, fmt='jpeg'):\n a = np.uint8(np.clip(a, 0, 1)*255)\n f = BytesIO()\n PIL.Image.fromarray(a).save(f, fmt)\n display(Image(data=f.getvalue()))\n \ndef visstd(a, s=0.1):\n '''Normalize the image range for visualization'''\n return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5\n\ndef T(layer):\n '''Helper for getting layer output tensor'''\n return graph.get_tensor_by_name(\"import/%s:0\"%layer)\n\ndef render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n \n img = img0.copy()\n for i in range(iter_n):\n g, score = sess.run([t_grad, t_score], {t_input:img})\n # normalizing the gradient, so the same step size should work \n g /= g.std()+1e-8 # for different layers and networks\n img += g*step\n print(score, end = ' ')\n clear_output()\n showarray(visstd(img))\n\nrender_naive(T(layer)[:,:,:,channel])",
"_____no_output_____"
]
],
[
[
"<a id=\"multiscale\"></a>\n## Multiscale image generation\n\nLooks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on a smaller scale will be upscaled and augmented with additional details at the next scale.\n\nWith multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values for such large images quickly exhausts GPU memory. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.",
"_____no_output_____"
]
],
[
[
"def tffunc(*argtypes):\n '''Helper that transforms TF-graph generating function into a regular one.\n See \"resize\" function below.\n '''\n placeholders = list(map(tf.placeholder, argtypes))\n def wrap(f):\n out = f(*placeholders)\n def wrapper(*args, **kw):\n return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))\n return wrapper\n return wrap\n\n# Helper function that uses TF to resize an image\ndef resize(img, size):\n img = tf.expand_dims(img, 0)\n return tf.image.resize_bilinear(img, size)[0,:,:,:]\nresize = tffunc(np.float32, np.int32)(resize)\n\n\ndef calc_grad_tiled(img, t_grad, tile_size=512):\n '''Compute the value of tensor t_grad over the image in a tiled way.\n Random shifts are applied to the image to blur tile boundaries over \n multiple iterations.'''\n sz = tile_size\n h, w = img.shape[:2]\n sx, sy = np.random.randint(sz, size=2)\n img_shift = np.roll(np.roll(img, sx, 1), sy, 0)\n grad = np.zeros_like(img)\n for y in range(0, max(h-sz//2, sz),sz):\n for x in range(0, max(w-sz//2, sz),sz):\n sub = img_shift[y:y+sz,x:x+sz]\n g = sess.run(t_grad, {t_input:sub})\n grad[y:y+sz,x:x+sz] = g\n return np.roll(np.roll(grad, -sx, 1), -sy, 0)",
"_____no_output_____"
],
[
"def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n \n img = img0.copy()\n for octave in range(octave_n):\n if octave>0:\n hw = np.float32(img.shape[:2])*octave_scale\n img = resize(img, np.int32(hw))\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n # normalizing the gradient, so the same step size should work \n g /= g.std()+1e-8 # for different layers and networks\n img += g*step\n print('.', end = ' ')\n clear_output()\n showarray(visstd(img))\n\nrender_multiscale(T(layer)[:,:,:,channel])",
"_____no_output_____"
]
],
[
[
"<a id=\"laplacian\"></a>\n## Laplacian Pyramid Gradient Normalization\n\nThis looks better, but the resulting images mostly contain high frequencies. Can we improve it? One way is to add a smoothness prior into the optimization objective. This will effectively blur the image a little every iteration, suppressing the higher frequencies, so that the lower frequencies can catch up. This will require more iterations to produce a nice image. Why don't we just boost lower frequencies of the gradient instead? One way to achieve this is through the [Laplacian pyramid](https://en.wikipedia.org/wiki/Pyramid_%28image_processing%29#Laplacian_pyramid) decomposition. We call the resulting technique _Laplacian Pyramid Gradient Normalization_.",
"_____no_output_____"
]
],
[
[
"k = np.float32([1,4,6,4,1])\nk = np.outer(k, k)\nk5x5 = k[:,:,None,None]/k.sum()*np.eye(3, dtype=np.float32)\n\ndef lap_split(img):\n '''Split the image into lo and hi frequency components'''\n with tf.name_scope('split'):\n lo = tf.nn.conv2d(img, k5x5, [1,2,2,1], 'SAME')\n lo2 = tf.nn.conv2d_transpose(lo, k5x5*4, tf.shape(img), [1,2,2,1])\n hi = img-lo2\n return lo, hi\n\ndef lap_split_n(img, n):\n '''Build Laplacian pyramid with n splits'''\n levels = []\n for i in range(n):\n img, hi = lap_split(img)\n levels.append(hi)\n levels.append(img)\n return levels[::-1]\n\ndef lap_merge(levels):\n '''Merge Laplacian pyramid'''\n img = levels[0]\n for hi in levels[1:]:\n with tf.name_scope('merge'):\n img = tf.nn.conv2d_transpose(img, k5x5*4, tf.shape(hi), [1,2,2,1]) + hi\n return img\n\ndef normalize_std(img, eps=1e-10):\n '''Normalize image by making its standard deviation = 1.0'''\n with tf.name_scope('normalize'):\n std = tf.sqrt(tf.reduce_mean(tf.square(img)))\n return img/tf.maximum(std, eps)\n\ndef lap_normalize(img, scale_n=4):\n '''Perform the Laplacian pyramid normalization.'''\n img = tf.expand_dims(img,0)\n tlevels = lap_split_n(img, scale_n)\n tlevels = list(map(normalize_std, tlevels))\n out = lap_merge(tlevels)\n return out[0,:,:,:]\n\n# Showing the lap_normalize graph with TensorBoard\nlap_graph = tf.Graph()\nwith lap_graph.as_default():\n lap_in = tf.placeholder(np.float32, name='lap_in')\n lap_out = lap_normalize(lap_in)\nshow_graph(lap_graph)",
"_____no_output_____"
],
[
"def render_lapnorm(t_obj, img0=img_noise, visfunc=visstd,\n iter_n=10, step=1.0, octave_n=3, octave_scale=1.4, lap_n=4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n # build the laplacian normalization graph\n lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))\n\n img = img0.copy()\n for octave in range(octave_n):\n if octave>0:\n hw = np.float32(img.shape[:2])*octave_scale\n img = resize(img, np.int32(hw))\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n g = lap_norm_func(g)\n img += g*step\n print('.', end = ' ')\n clear_output()\n showarray(visfunc(img))\n\nrender_lapnorm(T(layer)[:,:,:,channel])",
"_____no_output_____"
]
],
[
[
"<a id=\"playing\"></a>\n## Playing with feature visualizations\n\nWe got a nice smooth image using only 10 iterations per octave. When running on a GPU this takes just a few seconds. Let's try to visualize another channel from the same layer. The network can generate a wide diversity of patterns.",
"_____no_output_____"
]
],
[
[
"render_lapnorm(T(layer)[:,:,:,65])",
"_____no_output_____"
]
],
[
[
"Lower layers produce features of lower complexity.",
"_____no_output_____"
]
],
[
[
"render_lapnorm(T('mixed3b_1x1_pre_relu')[:,:,:,101])",
"_____no_output_____"
]
],
[
[
"There are many interesting things one may try. For example, optimizing a linear combination of features often gives a \"mixture\" pattern.",
"_____no_output_____"
]
],
[
[
"render_lapnorm(T(layer)[:,:,:,65]+T(layer)[:,:,:,139], octave_n=4)",
"_____no_output_____"
]
],
[
[
"<a id=\"deepdream\"></a>\n## DeepDream\n\nNow let's reproduce the [DeepDream algorithm](https://github.com/google/deepdream/blob/master/dream.ipynb) with TensorFlow. \n",
"_____no_output_____"
]
],
[
[
"def render_deepdream(t_obj, img0=img_noise,\n iter_n=10, step=1.5, octave_n=4, octave_scale=1.4):\n t_score = tf.reduce_mean(t_obj) # defining the optimization objective\n t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!\n\n # split the image into a number of octaves\n img = img0\n octaves = []\n for i in range(octave_n-1):\n hw = img.shape[:2]\n lo = resize(img, np.int32(np.float32(hw)/octave_scale))\n hi = img-resize(lo, hw)\n img = lo\n octaves.append(hi)\n \n # generate details octave by octave\n for octave in range(octave_n):\n if octave>0:\n hi = octaves[-octave]\n img = resize(img, hi.shape[:2])+hi\n for i in range(iter_n):\n g = calc_grad_tiled(img, t_grad)\n img += g*(step / (np.abs(g).mean()+1e-7))\n print('.',end = ' ')\n clear_output()\n showarray(img/255.0)",
"_____no_output_____"
]
],
[
[
"Let's load an image and populate it with DogSlugs (in case you've missed them).",
"_____no_output_____"
]
],
[
[
"img0 = PIL.Image.open('pilatus800.jpg')\nimg0 = np.float32(img0)\nshowarray(img0/255.0)",
"_____no_output_____"
],
[
"render_deepdream(tf.square(T('mixed4c')), img0)",
"_____no_output_____"
]
],
[
[
"Note that results can differ from the [Caffe](https://github.com/BVLC/caffe) implementation, as we are using an independently trained network. Still, the network seems to like dogs and animal-like features due to the nature of the ImageNet dataset.\n\nUsing an arbitrary optimization objective still works:",
"_____no_output_____"
]
],
[
[
"render_deepdream(T(layer)[:,:,:,139], img0)",
"_____no_output_____"
]
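,
[
"# Optional sketch (not in the original notebook) of the tip given below:\n# higher-resolution inputs and more octaves produce larger, more detailed\n# dreams. The target size and octave count here are arbitrary assumptions.\nimg_big = np.float32(PIL.Image.open('pilatus800.jpg').resize((1600, 1200)))\nrender_deepdream(tf.square(T('mixed4c')), img_big, octave_n=6)",
"_____no_output_____"
]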
],
[
[
"Don't hesitate to use higher resolution inputs and to increase the number of octaves, as sketched in the cell above! Here is an [example](http://storage.googleapis.com/deepdream/pilatus_flowers.jpg) of running the flower dream over a bigger image.",
"_____no_output_____"
],
[
"We hope that the visualization tricks described here will be helpful for analyzing the representations learned by neural networks, and that they find their use in various artistic applications.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
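"code",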
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
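"code",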
"code"
],
[
"markdown",
"markdown"
]
] |
d0dbdd38e2c2aa52b6dbe04b3c820725196539af | 366,478 | ipynb | Jupyter Notebook | docs/examples/project_progress.ipynb | david1309/neptune-contrib | 09018ce14b85579ad9f63eba922c5dff63ec9b77 | [
"MIT"
] | null | null | null | docs/examples/project_progress.ipynb | david1309/neptune-contrib | 09018ce14b85579ad9f63eba922c5dff63ec9b77 | [
"MIT"
] | null | null | null | docs/examples/project_progress.ipynb | david1309/neptune-contrib | 09018ce14b85579ad9f63eba922c5dff63ec9b77 | [
"MIT"
] | null | null | null | 1,388.174242 | 358,120 | 0.583159 | [
[
[
"# Visualize project progress\n\n## Prerequisites\nFetch the project experiment view (leaderboard).",
"_____no_output_____"
]
],
[
[
"from neptune.sessions import Session\n\nsession = Session()\nproject = session.get_projects('neptune-ml')['neptune-ml/Salt-Detection']\nleaderboard = project.get_leaderboard()",
"_____no_output_____"
]
],
[
[
"## Extract project progress information\nUse the `extract_project_progress_info` function and specify your metric column and a timestamp column.",
"_____no_output_____"
]
],
[
[
"from neptunecontrib.api.utils import extract_project_progress_info\n\nprogress_df = extract_project_progress_info(leaderboard, \n metric_colname='channel_IOUT', \n time_colname='finished')\nprogress_df.head()",
"_____no_output_____"
]
],
[
[
"## Visualize the project progress\nSimply use the `project_progress` visualization function.",
"_____no_output_____"
]
],
[
[
"from neptunecontrib.viz.projects import project_progress\nproject_progress(progress_df, width=400, heights=[50, 200])",
"_____no_output_____"
]
],
[
[
"*(Static HTML preview of the interactive `project_progress` chart, embedded as a Vega-Lite document; the full spec and its inline data are omitted here. Top panel: the `channel_IOUT` metric over time, actual value vs. current best, with an interval selector. Bottom panel: per-day running time or experiment count, with per-experiment tooltips.)*",
\"timestamp\": \"2018-07-27 18:00:20.864000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-398 | 0.4197 | ()\"}, {\"id\": \"SAL-400\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.41850000000000004, \"running_time\": 31, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-27 18:00:52.314000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-400 | 0.4185 | ()\"}, {\"id\": \"SAL-400\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 31, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-27 18:00:52.314000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-400 | 0.4185 | ()\"}, {\"id\": \"SAL-400\", \"resource\": \"running_time_day\", \"time_or_count\": 1.5863888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.41850000000000004, \"running_time\": 31, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-27 18:00:52.314000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-400 | 0.4185 | ()\"}, {\"id\": \"SAL-400\", \"resource\": \"running_time_day\", \"time_or_count\": 1.5863888888888888, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 31, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-27 18:00:52.314000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-400 | 0.4185 | ()\"}, {\"id\": \"SAL-401\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.4197083333333333, \"running_time\": 276, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-27 18:06:06.443000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-401 | 0.4197 | (solution-2)\"}, {\"id\": \"SAL-401\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 276, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-27 18:06:06.443000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-401 | 0.4197 | (solution-2)\"}, {\"id\": \"SAL-401\", \"resource\": \"running_time_day\", \"time_or_count\": 1.5863888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.4197083333333333, \"running_time\": 276, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-27 18:06:06.443000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-401 | 0.4197 | (solution-2)\"}, {\"id\": \"SAL-401\", \"resource\": \"running_time_day\", \"time_or_count\": 1.5863888888888888, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 276, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-27 18:06:06.443000+00:00\", \"timestamp_day\": \"2018-07-27\", \"text\": \"SAL-401 | 0.4197 | (solution-2)\"}, {\"id\": \"SAL-407\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.3721458333333334, \"running_time\": 21046, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 00:41:19.468000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-407 | 0.3721 | ()\"}, {\"id\": \"SAL-407\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 21046, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 00:41:19.468000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-407 | 
0.3721 | ()\"}, {\"id\": \"SAL-407\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.3721458333333334, \"running_time\": 21046, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 00:41:19.468000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-407 | 0.3721 | ()\"}, {\"id\": \"SAL-407\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 21046, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 00:41:19.468000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-407 | 0.3721 | ()\"}, {\"id\": \"SAL-409\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.39862500000000006, \"running_time\": 7548, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 02:47:08.542000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-409 | 0.3986 | ()\"}, {\"id\": \"SAL-409\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 7548, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 02:47:08.542000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-409 | 0.3986 | ()\"}, {\"id\": \"SAL-409\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.39862500000000006, \"running_time\": 7548, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 02:47:08.542000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-409 | 0.3986 | ()\"}, {\"id\": \"SAL-409\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 7548, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 02:47:08.542000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-409 | 0.3986 | ()\"}, {\"id\": \"SAL-408\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.39487500000000003, \"running_time\": 11224, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 05:54:12.987000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-408 | 0.3949 | ()\"}, {\"id\": \"SAL-408\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 11224, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 05:54:12.987000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-408 | 0.3949 | ()\"}, {\"id\": \"SAL-408\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.39487500000000003, \"running_time\": 11224, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 05:54:12.987000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-408 | 0.3949 | ()\"}, {\"id\": \"SAL-408\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 11224, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 05:54:12.987000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-408 | 0.3949 | ()\"}, {\"id\": \"SAL-410\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.40700000000000003, 
\"running_time\": 7477, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 07:58:51.058000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-410 | 0.4070 | ()\"}, {\"id\": \"SAL-410\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 7477, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 07:58:51.058000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-410 | 0.4070 | ()\"}, {\"id\": \"SAL-410\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.40700000000000003, \"running_time\": 7477, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 07:58:51.058000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-410 | 0.4070 | ()\"}, {\"id\": \"SAL-410\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 7477, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 07:58:51.058000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-410 | 0.4070 | ()\"}, {\"id\": \"SAL-411\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.4165, \"running_time\": 7296, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 10:00:27.666000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-411 | 0.4165 | ()\"}, {\"id\": \"SAL-411\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 7296, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 10:00:27.666000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-411 | 0.4165 | ()\"}, {\"id\": \"SAL-411\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.4165, \"running_time\": 7296, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 10:00:27.666000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-411 | 0.4165 | ()\"}, {\"id\": \"SAL-411\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.43325, \"running_time\": 7296, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 10:00:27.666000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-411 | 0.4165 | ()\"}, {\"id\": \"SAL-417\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.7645000000000001, \"running_time\": 27, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-28 12:14:51.255000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-417 | 0.7645 | (solution-2 , open)\"}, {\"id\": \"SAL-417\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 27, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-28 12:14:51.255000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-417 | 0.7645 | (solution-2 , open)\"}, {\"id\": \"SAL-417\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7645000000000001, \"running_time\": 27, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-28 
12:14:51.255000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-417 | 0.7645 | (solution-2 , open)\"}, {\"id\": \"SAL-417\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 27, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-28 12:14:51.255000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-417 | 0.7645 | (solution-2 , open)\"}, {\"id\": \"SAL-426\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.7447499999999999, \"running_time\": 4139, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 14:05:12.635000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-426 | 0.7447 | ()\"}, {\"id\": \"SAL-426\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4139, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 14:05:12.635000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-426 | 0.7447 | ()\"}, {\"id\": \"SAL-426\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7447499999999999, \"running_time\": 4139, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 14:05:12.635000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-426 | 0.7447 | ()\"}, {\"id\": \"SAL-426\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4139, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 14:05:12.635000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-426 | 0.7447 | ()\"}, {\"id\": \"SAL-422\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.7265, \"running_time\": 3448, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:02:41.792000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-422 | 0.7265 | ()\"}, {\"id\": \"SAL-422\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 3448, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:02:41.792000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-422 | 0.7265 | ()\"}, {\"id\": \"SAL-422\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7265, \"running_time\": 3448, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:02:41.792000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-422 | 0.7265 | ()\"}, {\"id\": \"SAL-422\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 3448, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:02:41.792000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-422 | 0.7265 | ()\"}, {\"id\": \"SAL-421\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.73575, \"running_time\": 2642, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:46:44.681000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-421 | 0.7358 | ()\"}, {\"id\": 
\"SAL-421\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 2642, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:46:44.681000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-421 | 0.7358 | ()\"}, {\"id\": \"SAL-421\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.73575, \"running_time\": 2642, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:46:44.681000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-421 | 0.7358 | ()\"}, {\"id\": \"SAL-421\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 2642, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 15:46:44.681000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-421 | 0.7358 | ()\"}, {\"id\": \"SAL-425\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.7387500000000001, \"running_time\": 4306, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 16:58:31.821000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-425 | 0.7388 | ()\"}, {\"id\": \"SAL-425\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4306, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 16:58:31.821000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-425 | 0.7388 | ()\"}, {\"id\": \"SAL-425\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7387500000000001, \"running_time\": 4306, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 16:58:31.821000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-425 | 0.7388 | ()\"}, {\"id\": \"SAL-425\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4306, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 16:58:31.821000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-425 | 0.7388 | ()\"}, {\"id\": \"SAL-424\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.7397500000000001, \"running_time\": 4387, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 18:11:39.604000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-424 | 0.7398 | ()\"}, {\"id\": \"SAL-424\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4387, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 18:11:39.604000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-424 | 0.7398 | ()\"}, {\"id\": \"SAL-424\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7397500000000001, \"running_time\": 4387, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 18:11:39.604000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-424 | 0.7398 | ()\"}, {\"id\": \"SAL-424\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 
0.7645000000000001, \"running_time\": 4387, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 18:11:39.604000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-424 | 0.7398 | ()\"}, {\"id\": \"SAL-423\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"actual\", \"metric\": 0.7254999999999999, \"running_time\": 3972, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 19:17:52.079000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-423 | 0.7255 | ()\"}, {\"id\": \"SAL-423\", \"resource\": \"experiment_count_day\", \"time_or_count\": 12.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 3972, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 19:17:52.079000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-423 | 0.7255 | ()\"}, {\"id\": \"SAL-423\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7254999999999999, \"running_time\": 3972, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 19:17:52.079000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-423 | 0.7255 | ()\"}, {\"id\": \"SAL-423\", \"resource\": \"running_time_day\", \"time_or_count\": 21.531111111111112, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 3972, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-28 19:17:52.079000+00:00\", \"timestamp_day\": \"2018-07-28\", \"text\": \"SAL-423 | 0.7255 | ()\"}, {\"id\": \"SAL-437\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.6757500000000001, \"running_time\": 4938, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 01:44:23.818000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-437 | 0.6758 | ()\"}, {\"id\": \"SAL-437\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4938, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 01:44:23.818000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-437 | 0.6758 | ()\"}, {\"id\": \"SAL-437\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.6757500000000001, \"running_time\": 4938, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 01:44:23.818000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-437 | 0.6758 | ()\"}, {\"id\": \"SAL-437\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7645000000000001, \"running_time\": 4938, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 01:44:23.818000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-437 | 0.6758 | ()\"}, {\"id\": \"SAL-438\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.774, \"running_time\": 10230, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 04:34:54.177000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-438 | 0.7740 | ()\"}, {\"id\": \"SAL-438\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.774, \"running_time\": 10230, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 04:34:54.177000+00:00\", \"timestamp_day\": \"2018-07-29\", 
\"text\": \"SAL-438 | 0.7740 | ()\"}, {\"id\": \"SAL-438\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.774, \"running_time\": 10230, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 04:34:54.177000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-438 | 0.7740 | ()\"}, {\"id\": \"SAL-438\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.774, \"running_time\": 10230, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 04:34:54.177000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-438 | 0.7740 | ()\"}, {\"id\": \"SAL-439\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7830000000000001, \"running_time\": 5383, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 06:04:37.990000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-439 | 0.7830 | ()\"}, {\"id\": \"SAL-439\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 5383, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 06:04:37.990000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-439 | 0.7830 | ()\"}, {\"id\": \"SAL-439\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7830000000000001, \"running_time\": 5383, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 06:04:37.990000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-439 | 0.7830 | ()\"}, {\"id\": \"SAL-439\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 5383, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 06:04:37.990000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-439 | 0.7830 | ()\"}, {\"id\": \"SAL-442\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7739999999999999, \"running_time\": 4987, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 07:27:45.939000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-442 | 0.7740 | ()\"}, {\"id\": \"SAL-442\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 4987, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 07:27:45.939000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-442 | 0.7740 | ()\"}, {\"id\": \"SAL-442\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7739999999999999, \"running_time\": 4987, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 07:27:45.939000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-442 | 0.7740 | ()\"}, {\"id\": \"SAL-442\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 4987, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 07:27:45.939000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-442 | 0.7740 | ()\"}, {\"id\": \"SAL-436\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": 
\"actual\", \"metric\": 0.7452500000000001, \"running_time\": 5001, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 08:51:08.153000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-436 | 0.7453 | ()\"}, {\"id\": \"SAL-436\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 5001, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 08:51:08.153000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-436 | 0.7453 | ()\"}, {\"id\": \"SAL-436\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7452500000000001, \"running_time\": 5001, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 08:51:08.153000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-436 | 0.7453 | ()\"}, {\"id\": \"SAL-436\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 5001, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 08:51:08.153000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-436 | 0.7453 | ()\"}, {\"id\": \"SAL-440\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7830000000000001, \"running_time\": 6543, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 10:40:11.882000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-440 | 0.7830 | ()\"}, {\"id\": \"SAL-440\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 6543, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 10:40:11.882000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-440 | 0.7830 | ()\"}, {\"id\": \"SAL-440\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7830000000000001, \"running_time\": 6543, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 10:40:11.882000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-440 | 0.7830 | ()\"}, {\"id\": \"SAL-440\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 6543, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 10:40:11.882000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-440 | 0.7830 | ()\"}, {\"id\": \"SAL-443\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7797499999999999, \"running_time\": 6096, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 12:51:24.230000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-443 | 0.7797 | (solution-2 , open)\"}, {\"id\": \"SAL-443\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 6096, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 12:51:24.230000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-443 | 0.7797 | (solution-2 , open)\"}, {\"id\": \"SAL-443\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7797499999999999, 
\"running_time\": 6096, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 12:51:24.230000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-443 | 0.7797 | (solution-2 , open)\"}, {\"id\": \"SAL-443\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 6096, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 12:51:24.230000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-443 | 0.7797 | (solution-2 , open)\"}, {\"id\": \"SAL-469\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.77, \"running_time\": 45, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-29 13:26:42.482000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-469 | 0.7700 | (solution-2)\"}, {\"id\": \"SAL-469\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 45, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-29 13:26:42.482000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-469 | 0.7700 | (solution-2)\"}, {\"id\": \"SAL-469\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.77, \"running_time\": 45, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-29 13:26:42.482000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-469 | 0.7700 | (solution-2)\"}, {\"id\": \"SAL-469\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7830000000000001, \"running_time\": 45, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-29 13:26:42.482000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-469 | 0.7700 | (solution-2)\"}, {\"id\": \"SAL-485\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.79625, \"running_time\": 4092, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 17:13:10.369000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-485 | 0.7963 | ()\"}, {\"id\": \"SAL-485\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 4092, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 17:13:10.369000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-485 | 0.7963 | ()\"}, {\"id\": \"SAL-485\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.79625, \"running_time\": 4092, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 17:13:10.369000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-485 | 0.7963 | ()\"}, {\"id\": \"SAL-485\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 4092, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 17:13:10.369000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-485 | 0.7963 | ()\"}, {\"id\": \"SAL-483\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7830000000000001, \"running_time\": 3135, \"owner\": 
\"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 18:05:26.301000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-483 | 0.7830 | ()\"}, {\"id\": \"SAL-483\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 3135, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 18:05:26.301000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-483 | 0.7830 | ()\"}, {\"id\": \"SAL-483\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7830000000000001, \"running_time\": 3135, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 18:05:26.301000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-483 | 0.7830 | ()\"}, {\"id\": \"SAL-483\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 3135, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 18:05:26.301000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-483 | 0.7830 | ()\"}, {\"id\": \"SAL-484\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.79025, \"running_time\": 5589, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 19:38:36.083000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-484 | 0.7903 | ()\"}, {\"id\": \"SAL-484\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 5589, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 19:38:36.083000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-484 | 0.7903 | ()\"}, {\"id\": \"SAL-484\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.79025, \"running_time\": 5589, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 19:38:36.083000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-484 | 0.7903 | ()\"}, {\"id\": \"SAL-484\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 5589, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 19:38:36.083000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-484 | 0.7903 | ()\"}, {\"id\": \"SAL-486\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7957500000000001, \"running_time\": 4793, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 20:58:29.800000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-486 | 0.7958 | ()\"}, {\"id\": \"SAL-486\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 4793, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 20:58:29.800000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-486 | 0.7958 | ()\"}, {\"id\": \"SAL-486\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7957500000000001, \"running_time\": 4793, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 20:58:29.800000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-486 | 0.7958 | ()\"}, {\"id\": \"SAL-486\", \"resource\": \"running_time_day\", 
\"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 4793, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 20:58:29.800000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-486 | 0.7958 | ()\"}, {\"id\": \"SAL-491\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7957500000000001, \"running_time\": 263, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 21:10:10.239000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-491 | 0.7958 | (solution-2 , open)\"}, {\"id\": \"SAL-491\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 263, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 21:10:10.239000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-491 | 0.7958 | (solution-2 , open)\"}, {\"id\": \"SAL-491\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7957500000000001, \"running_time\": 263, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 21:10:10.239000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-491 | 0.7958 | (solution-2 , open)\"}, {\"id\": \"SAL-491\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 263, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-29 21:10:10.239000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-491 | 0.7958 | (solution-2 , open)\"}, {\"id\": \"SAL-492\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7659999999999999, \"running_time\": 3150, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 22:25:13.091000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-492 | 0.7660 | ()\"}, {\"id\": \"SAL-492\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 3150, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 22:25:13.091000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-492 | 0.7660 | ()\"}, {\"id\": \"SAL-492\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7659999999999999, \"running_time\": 3150, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 22:25:13.091000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-492 | 0.7660 | ()\"}, {\"id\": \"SAL-492\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.79625, \"running_time\": 3150, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 22:25:13.091000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-492 | 0.7660 | ()\"}, {\"id\": \"SAL-500\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, \"actual_or_best\": \"actual\", \"metric\": 0.7995, \"running_time\": 3982, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 23:31:36.320000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-500 | 0.7995 | ()\"}, {\"id\": \"SAL-500\", \"resource\": \"experiment_count_day\", \"time_or_count\": 15.0, 
\"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 3982, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 23:31:36.320000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-500 | 0.7995 | ()\"}, {\"id\": \"SAL-500\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"actual\", \"metric\": 0.7995, \"running_time\": 3982, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 23:31:36.320000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-500 | 0.7995 | ()\"}, {\"id\": \"SAL-500\", \"resource\": \"running_time_day\", \"time_or_count\": 18.951944444444443, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 3982, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-29 23:31:36.320000+00:00\", \"timestamp_day\": \"2018-07-29\", \"text\": \"SAL-500 | 0.7995 | ()\"}, {\"id\": \"SAL-503\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.79525, \"running_time\": 4800, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 00:51:36.829000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-503 | 0.7953 | ()\"}, {\"id\": \"SAL-503\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 4800, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 00:51:36.829000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-503 | 0.7953 | ()\"}, {\"id\": \"SAL-503\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.79525, \"running_time\": 4800, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 00:51:36.829000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-503 | 0.7953 | ()\"}, {\"id\": \"SAL-503\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 4800, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 00:51:36.829000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-503 | 0.7953 | ()\"}, {\"id\": \"SAL-494\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7715000000000001, \"running_time\": 3182, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 01:44:39.757000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-494 | 0.7715 | ()\"}, {\"id\": \"SAL-494\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 3182, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 01:44:39.757000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-494 | 0.7715 | ()\"}, {\"id\": \"SAL-494\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7715000000000001, \"running_time\": 3182, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 01:44:39.757000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-494 | 0.7715 | ()\"}, {\"id\": \"SAL-494\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 3182, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 01:44:39.757000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-494 
| 0.7715 | ()\"}, {\"id\": \"SAL-496\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7882499999999999, \"running_time\": 6122, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 03:26:43.002000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-496 | 0.7882 | ()\"}, {\"id\": \"SAL-496\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 6122, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 03:26:43.002000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-496 | 0.7882 | ()\"}, {\"id\": \"SAL-496\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7882499999999999, \"running_time\": 6122, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 03:26:43.002000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-496 | 0.7882 | ()\"}, {\"id\": \"SAL-496\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 6122, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 03:26:43.002000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-496 | 0.7882 | ()\"}, {\"id\": \"SAL-501\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7985, \"running_time\": 4292, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 04:38:15.424000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-501 | 0.7985 | ()\"}, {\"id\": \"SAL-501\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 4292, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 04:38:15.424000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-501 | 0.7985 | ()\"}, {\"id\": \"SAL-501\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7985, \"running_time\": 4292, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 04:38:15.424000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-501 | 0.7985 | ()\"}, {\"id\": \"SAL-501\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 4292, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 04:38:15.424000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-501 | 0.7985 | ()\"}, {\"id\": \"SAL-499\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7935, \"running_time\": 3738, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 05:40:34.627000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-499 | 0.7935 | ()\"}, {\"id\": \"SAL-499\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 3738, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 05:40:34.627000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-499 | 0.7935 | ()\"}, {\"id\": \"SAL-499\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7935, \"running_time\": 3738, \"owner\": \"czakon\", \"tags\": [], 
\"timestamp\": \"2018-07-30 05:40:34.627000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-499 | 0.7935 | ()\"}, {\"id\": \"SAL-499\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 3738, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 05:40:34.627000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-499 | 0.7935 | ()\"}, {\"id\": \"SAL-504\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7939999999999999, \"running_time\": 5680, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 07:33:47.935000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-504 | 0.7940 | (solution-2 , open)\"}, {\"id\": \"SAL-504\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5680, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 07:33:47.935000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-504 | 0.7940 | (solution-2 , open)\"}, {\"id\": \"SAL-504\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7939999999999999, \"running_time\": 5680, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 07:33:47.935000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-504 | 0.7940 | (solution-2 , open)\"}, {\"id\": \"SAL-504\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5680, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 07:33:47.935000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-504 | 0.7940 | (solution-2 , open)\"}, {\"id\": \"SAL-508\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7967500000000001, \"running_time\": 5004, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 11:26:14.303000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-508 | 0.7968 | ()\"}, {\"id\": \"SAL-508\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5004, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 11:26:14.303000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-508 | 0.7968 | ()\"}, {\"id\": \"SAL-508\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7967500000000001, \"running_time\": 5004, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 11:26:14.303000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-508 | 0.7968 | ()\"}, {\"id\": \"SAL-508\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5004, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 11:26:14.303000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-508 | 0.7968 | ()\"}, {\"id\": \"SAL-507\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7927500000000001, \"running_time\": 5113, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 
12:51:28.272000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-507 | 0.7928 | ()\"}, {\"id\": \"SAL-507\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5113, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 12:51:28.272000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-507 | 0.7928 | ()\"}, {\"id\": \"SAL-507\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7927500000000001, \"running_time\": 5113, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 12:51:28.272000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-507 | 0.7928 | ()\"}, {\"id\": \"SAL-507\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5113, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 12:51:28.272000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-507 | 0.7928 | ()\"}, {\"id\": \"SAL-509\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7955, \"running_time\": 6719, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"no_blur\", \"open\"], \"timestamp\": \"2018-07-30 14:46:58.044000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-509 | 0.7955 | (solution-2 , no_blur , open)\"}, {\"id\": \"SAL-509\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 6719, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"no_blur\", \"open\"], \"timestamp\": \"2018-07-30 14:46:58.044000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-509 | 0.7955 | (solution-2 , no_blur , open)\"}, {\"id\": \"SAL-509\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7955, \"running_time\": 6719, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"no_blur\", \"open\"], \"timestamp\": \"2018-07-30 14:46:58.044000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-509 | 0.7955 | (solution-2 , no_blur , open)\"}, {\"id\": \"SAL-509\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 6719, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"no_blur\", \"open\"], \"timestamp\": \"2018-07-30 14:46:58.044000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-509 | 0.7955 | (solution-2 , no_blur , open)\"}, {\"id\": \"SAL-511\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7982500000000001, \"running_time\": 5121, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 17:38:22.321000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-511 | 0.7983 | (solution-2 , open)\"}, {\"id\": \"SAL-511\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5121, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 17:38:22.321000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-511 | 0.7983 | (solution-2 , open)\"}, {\"id\": \"SAL-511\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", 
\"metric\": 0.7982500000000001, \"running_time\": 5121, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 17:38:22.321000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-511 | 0.7983 | (solution-2 , open)\"}, {\"id\": \"SAL-511\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.7995, \"running_time\": 5121, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 17:38:22.321000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-511 | 0.7983 | (solution-2 , open)\"}, {\"id\": \"SAL-512\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.8047500000000001, \"running_time\": 5189, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 20:07:07.918000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-512 | 0.8048 | (solution-2 , open)\"}, {\"id\": \"SAL-512\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 5189, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 20:07:07.918000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-512 | 0.8048 | (solution-2 , open)\"}, {\"id\": \"SAL-512\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.8047500000000001, \"running_time\": 5189, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 20:07:07.918000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-512 | 0.8048 | (solution-2 , open)\"}, {\"id\": \"SAL-512\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 5189, \"owner\": \"czakon\", \"tags\": [\"solution-2\", \"open\"], \"timestamp\": \"2018-07-30 20:07:07.918000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-512 | 0.8048 | (solution-2 , open)\"}, {\"id\": \"SAL-513\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.7995, \"running_time\": 5686, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-30 21:44:13.205000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-513 | 0.7995 | (solution-2)\"}, {\"id\": \"SAL-513\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 5686, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-30 21:44:13.205000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-513 | 0.7995 | (solution-2)\"}, {\"id\": \"SAL-513\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.7995, \"running_time\": 5686, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-30 21:44:13.205000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-513 | 0.7995 | (solution-2)\"}, {\"id\": \"SAL-513\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 5686, \"owner\": \"czakon\", \"tags\": [\"solution-2\"], \"timestamp\": \"2018-07-30 21:44:13.205000+00:00\", \"timestamp_day\": 
\"2018-07-30\", \"text\": \"SAL-513 | 0.7995 | (solution-2)\"}, {\"id\": \"SAL-518\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"actual\", \"metric\": 0.706, \"running_time\": 3959, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 22:51:40.686000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-518 | 0.7060 | ()\"}, {\"id\": \"SAL-518\", \"resource\": \"experiment_count_day\", \"time_or_count\": 13.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3959, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 22:51:40.686000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-518 | 0.7060 | ()\"}, {\"id\": \"SAL-518\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"actual\", \"metric\": 0.706, \"running_time\": 3959, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 22:51:40.686000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-518 | 0.7060 | ()\"}, {\"id\": \"SAL-518\", \"resource\": \"running_time_day\", \"time_or_count\": 17.945833333333333, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3959, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-30 22:51:40.686000+00:00\", \"timestamp_day\": \"2018-07-30\", \"text\": \"SAL-518 | 0.7060 | ()\"}, {\"id\": \"SAL-517\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.706, \"running_time\": 4643, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:09:04.675000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-517 | 0.7060 | ()\"}, {\"id\": \"SAL-517\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 4643, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:09:04.675000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-517 | 0.7060 | ()\"}, {\"id\": \"SAL-517\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"actual\", \"metric\": 0.706, \"running_time\": 4643, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:09:04.675000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-517 | 0.7060 | ()\"}, {\"id\": \"SAL-517\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 4643, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:09:04.675000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-517 | 0.7060 | ()\"}, {\"id\": \"SAL-519\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.7240000000000001, \"running_time\": 3051, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:59:56.755000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-519 | 0.7240 | ()\"}, {\"id\": \"SAL-519\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3051, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:59:56.755000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-519 | 0.7240 | ()\"}, {\"id\": \"SAL-519\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"actual\", \"metric\": 0.7240000000000001, \"running_time\": 
3051, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:59:56.755000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-519 | 0.7240 | ()\"}, {\"id\": \"SAL-519\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3051, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 00:59:56.755000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-519 | 0.7240 | ()\"}, {\"id\": \"SAL-522\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.7184999999999999, \"running_time\": 2846, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 06:35:43.446000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-522 | 0.7185 | ()\"}, {\"id\": \"SAL-522\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 2846, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 06:35:43.446000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-522 | 0.7185 | ()\"}, {\"id\": \"SAL-522\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"actual\", \"metric\": 0.7184999999999999, \"running_time\": 2846, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 06:35:43.446000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-522 | 0.7185 | ()\"}, {\"id\": \"SAL-522\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 2846, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 06:35:43.446000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-522 | 0.7185 | ()\"}, {\"id\": \"SAL-553\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.7977500000000001, \"running_time\": 4117, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-07-31 17:47:06.815000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-553 | 0.7978 | (solution-3)\"}, {\"id\": \"SAL-553\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 4117, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-07-31 17:47:06.815000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-553 | 0.7978 | (solution-3)\"}, {\"id\": \"SAL-553\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"actual\", \"metric\": 0.7977500000000001, \"running_time\": 4117, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-07-31 17:47:06.815000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-553 | 0.7978 | (solution-3)\"}, {\"id\": \"SAL-553\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 4117, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-07-31 17:47:06.815000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-553 | 0.7978 | (solution-3)\"}, {\"id\": \"SAL-556\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.74575, \"running_time\": 3013, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 22:19:54.349000+00:00\", \"timestamp_day\": 
\"2018-07-31\", \"text\": \"SAL-556 | 0.7458 | ()\"}, {\"id\": \"SAL-556\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3013, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 22:19:54.349000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-556 | 0.7458 | ()\"}, {\"id\": \"SAL-556\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"actual\", \"metric\": 0.74575, \"running_time\": 3013, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 22:19:54.349000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-556 | 0.7458 | ()\"}, {\"id\": \"SAL-556\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3013, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 22:19:54.349000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-556 | 0.7458 | ()\"}, {\"id\": \"SAL-555\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.7979999999999999, \"running_time\": 3678, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 23:21:13.332000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-555 | 0.7980 | ()\"}, {\"id\": \"SAL-555\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3678, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 23:21:13.332000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-555 | 0.7980 | ()\"}, {\"id\": \"SAL-555\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"actual\", \"metric\": 0.7979999999999999, \"running_time\": 3678, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 23:21:13.332000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-555 | 0.7980 | ()\"}, {\"id\": \"SAL-555\", \"resource\": \"running_time_day\", \"time_or_count\": 5.93, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 3678, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-07-31 23:21:13.332000+00:00\", \"timestamp_day\": \"2018-07-31\", \"text\": \"SAL-555 | 0.7980 | ()\"}, {\"id\": \"SAL-557\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.62225, \"running_time\": 5587, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 00:54:21.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-557 | 0.6222 | ()\"}, {\"id\": \"SAL-557\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 5587, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 00:54:21.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-557 | 0.6222 | ()\"}, {\"id\": \"SAL-557\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.62225, \"running_time\": 5587, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 00:54:21.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-557 | 0.6222 | ()\"}, {\"id\": \"SAL-557\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 
5587, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 00:54:21.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-557 | 0.6222 | ()\"}, {\"id\": \"SAL-554\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.795, \"running_time\": 2501, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 01:36:03.182000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-554 | 0.7950 | ()\"}, {\"id\": \"SAL-554\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 2501, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 01:36:03.182000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-554 | 0.7950 | ()\"}, {\"id\": \"SAL-554\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.795, \"running_time\": 2501, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 01:36:03.182000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-554 | 0.7950 | ()\"}, {\"id\": \"SAL-554\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 2501, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-08-01 01:36:03.182000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-554 | 0.7950 | ()\"}, {\"id\": \"SAL-560\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.802, \"running_time\": 6246, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 07:57:38.636000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-560 | 0.8020 | (solution-3 , open)\"}, {\"id\": \"SAL-560\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 6246, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 07:57:38.636000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-560 | 0.8020 | (solution-3 , open)\"}, {\"id\": \"SAL-560\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.802, \"running_time\": 6246, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 07:57:38.636000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-560 | 0.8020 | (solution-3 , open)\"}, {\"id\": \"SAL-560\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8047500000000001, \"running_time\": 6246, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 07:57:38.636000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-560 | 0.8020 | (solution-3 , open)\"}, {\"id\": \"SAL-562\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.8069999999999999, \"running_time\": 3663, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-01 17:19:29.111000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-562 | 0.8070 | (solution-3)\"}, {\"id\": \"SAL-562\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 3663, \"owner\": 
\"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-01 17:19:29.111000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-562 | 0.8070 | (solution-3)\"}, {\"id\": \"SAL-562\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8069999999999999, \"running_time\": 3663, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-01 17:19:29.111000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-562 | 0.8070 | (solution-3)\"}, {\"id\": \"SAL-562\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 3663, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-01 17:19:29.111000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-562 | 0.8070 | (solution-3)\"}, {\"id\": \"SAL-563\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.8069999999999999, \"running_time\": 176, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 19:09:45.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-563 | 0.8070 | (solution-3 , open)\"}, {\"id\": \"SAL-563\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 176, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 19:09:45.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-563 | 0.8070 | (solution-3 , open)\"}, {\"id\": \"SAL-563\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8069999999999999, \"running_time\": 176, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 19:09:45.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-563 | 0.8070 | (solution-3 , open)\"}, {\"id\": \"SAL-563\", \"resource\": \"running_time_day\", \"time_or_count\": 5.048055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 176, \"owner\": \"czakon\", \"tags\": [\"solution-3\", \"open\"], \"timestamp\": \"2018-08-01 19:09:45.491000+00:00\", \"timestamp_day\": \"2018-08-01\", \"text\": \"SAL-563 | 0.8070 | (solution-3 , open)\"}, {\"id\": \"SAL-564\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.7885, \"running_time\": 4863, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 19:41:59.285000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-564 | 0.7885 | (solution-3)\"}, {\"id\": \"SAL-564\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 4863, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 19:41:59.285000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-564 | 0.7885 | (solution-3)\"}, {\"id\": \"SAL-564\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"actual\", \"metric\": 0.7885, \"running_time\": 4863, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 19:41:59.285000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-564 | 0.7885 | (solution-3)\"}, {\"id\": \"SAL-564\", \"resource\": 
\"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 4863, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 19:41:59.285000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-564 | 0.7885 | (solution-3)\"}, {\"id\": \"SAL-567\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.7902499999999999, \"running_time\": 3362, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 20:03:43.886000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-567 | 0.7902 | (solution-3)\"}, {\"id\": \"SAL-567\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 3362, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 20:03:43.886000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-567 | 0.7902 | (solution-3)\"}, {\"id\": \"SAL-567\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"actual\", \"metric\": 0.7902499999999999, \"running_time\": 3362, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 20:03:43.886000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-567 | 0.7902 | (solution-3)\"}, {\"id\": \"SAL-567\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"best\", \"metric\": 0.8069999999999999, \"running_time\": 3362, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 20:03:43.886000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-567 | 0.7902 | (solution-3)\"}, {\"id\": \"SAL-568\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8077500000000001, \"running_time\": 5006, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 22:07:54.177000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-568 | 0.8078 | (solution-3)\"}, {\"id\": \"SAL-568\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8077500000000001, \"running_time\": 5006, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 22:07:54.177000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-568 | 0.8078 | (solution-3)\"}, {\"id\": \"SAL-568\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8077500000000001, \"running_time\": 5006, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 22:07:54.177000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-568 | 0.8078 | (solution-3)\"}, {\"id\": \"SAL-568\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"best\", \"metric\": 0.8077500000000001, \"running_time\": 5006, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 22:07:54.177000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-568 | 0.8078 | (solution-3)\"}, {\"id\": \"SAL-569\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8082499999999999, \"running_time\": 5316, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 23:45:21.476000+00:00\", \"timestamp_day\": 
\"2018-08-04\", \"text\": \"SAL-569 | 0.8082 | (solution-3)\"}, {\"id\": \"SAL-569\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 5316, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 23:45:21.476000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-569 | 0.8082 | (solution-3)\"}, {\"id\": \"SAL-569\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8082499999999999, \"running_time\": 5316, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 23:45:21.476000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-569 | 0.8082 | (solution-3)\"}, {\"id\": \"SAL-569\", \"resource\": \"running_time_day\", \"time_or_count\": 5.151944444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 5316, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-04 23:45:21.476000+00:00\", \"timestamp_day\": \"2018-08-04\", \"text\": \"SAL-569 | 0.8082 | (solution-3)\"}, {\"id\": \"SAL-570\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.7982499999999999, \"running_time\": 3181, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-05 12:37:49.968000+00:00\", \"timestamp_day\": \"2018-08-05\", \"text\": \"SAL-570 | 0.7982 | (solution-3)\"}, {\"id\": \"SAL-570\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 3181, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-05 12:37:49.968000+00:00\", \"timestamp_day\": \"2018-08-05\", \"text\": \"SAL-570 | 0.7982 | (solution-3)\"}, {\"id\": \"SAL-570\", \"resource\": \"running_time_day\", \"time_or_count\": 0.8836111111111111, \"actual_or_best\": \"actual\", \"metric\": 0.7982499999999999, \"running_time\": 3181, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-05 12:37:49.968000+00:00\", \"timestamp_day\": \"2018-08-05\", \"text\": \"SAL-570 | 0.7982 | (solution-3)\"}, {\"id\": \"SAL-570\", \"resource\": \"running_time_day\", \"time_or_count\": 0.8836111111111111, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 3181, \"owner\": \"czakon\", \"tags\": [\"solution-3\"], \"timestamp\": \"2018-08-05 12:37:49.968000+00:00\", \"timestamp_day\": \"2018-08-05\", \"text\": \"SAL-570 | 0.7982 | (solution-3)\"}, {\"id\": \"SAL-584\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.591, \"running_time\": 256, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-08-22 12:55:39.612000+00:00\", \"timestamp_day\": \"2018-08-22\", \"text\": \"SAL-584 | 0.5910 | (solution-1)\"}, {\"id\": \"SAL-584\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 256, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-08-22 12:55:39.612000+00:00\", \"timestamp_day\": \"2018-08-22\", \"text\": \"SAL-584 | 0.5910 | (solution-1)\"}, {\"id\": \"SAL-584\", \"resource\": \"running_time_day\", \"time_or_count\": 0.07111111111111111, \"actual_or_best\": \"actual\", \"metric\": 0.591, \"running_time\": 256, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], 
\"timestamp\": \"2018-08-22 12:55:39.612000+00:00\", \"timestamp_day\": \"2018-08-22\", \"text\": \"SAL-584 | 0.5910 | (solution-1)\"}, {\"id\": \"SAL-584\", \"resource\": \"running_time_day\", \"time_or_count\": 0.07111111111111111, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 256, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-08-22 12:55:39.612000+00:00\", \"timestamp_day\": \"2018-08-22\", \"text\": \"SAL-584 | 0.5910 | (solution-1)\"}, {\"id\": \"SAL-625\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.801519060289675, \"running_time\": 27835, \"owner\": \"czakon\", \"tags\": [\"solution-4\", \"open\"], \"timestamp\": \"2018-08-29 00:25:06.339000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-625 | 0.8015 | (solution-4 , open)\"}, {\"id\": \"SAL-625\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 27835, \"owner\": \"czakon\", \"tags\": [\"solution-4\", \"open\"], \"timestamp\": \"2018-08-29 00:25:06.339000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-625 | 0.8015 | (solution-4 , open)\"}, {\"id\": \"SAL-625\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.801519060289675, \"running_time\": 27835, \"owner\": \"czakon\", \"tags\": [\"solution-4\", \"open\"], \"timestamp\": \"2018-08-29 00:25:06.339000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-625 | 0.8015 | (solution-4 , open)\"}, {\"id\": \"SAL-625\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 27835, \"owner\": \"czakon\", \"tags\": [\"solution-4\", \"open\"], \"timestamp\": \"2018-08-29 00:25:06.339000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-625 | 0.8015 | (solution-4 , open)\"}, {\"id\": \"SAL-635\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.7920955063009035, \"running_time\": 311, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:41:32.045000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-635 | 0.7921 | (solution-5 , open)\"}, {\"id\": \"SAL-635\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 311, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:41:32.045000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-635 | 0.7921 | (solution-5 , open)\"}, {\"id\": \"SAL-635\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.7920955063009035, \"running_time\": 311, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:41:32.045000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-635 | 0.7921 | (solution-5 , open)\"}, {\"id\": \"SAL-635\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 311, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:41:32.045000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-635 | 0.7921 | (solution-5 , 
open)\"}, {\"id\": \"SAL-636\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8039692991342168, \"running_time\": 152, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:47:04.881000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-636 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-636\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 152, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:47:04.881000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-636 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-636\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8039692991342168, \"running_time\": 152, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:47:04.881000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-636 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-636\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 152, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 09:47:04.881000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-636 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-643\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8029189609399504, \"running_time\": 215, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:11:19.441000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-643 | 0.8029 | (solution-5 , open)\"}, {\"id\": \"SAL-643\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 215, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:11:19.441000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-643 | 0.8029 | (solution-5 , open)\"}, {\"id\": \"SAL-643\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8029189609399504, \"running_time\": 215, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:11:19.441000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-643 | 0.8029 | (solution-5 , open)\"}, {\"id\": \"SAL-643\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 215, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:11:19.441000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-643 | 0.8029 | (solution-5 , open)\"}, {\"id\": \"SAL-644\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.800520047783916, \"running_time\": 223, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:17:15.182000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-644 | 0.8005 | (solution-5 , open)\"}, {\"id\": \"SAL-644\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", 
\"metric\": 0.8082499999999999, \"running_time\": 223, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:17:15.182000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-644 | 0.8005 | (solution-5 , open)\"}, {\"id\": \"SAL-644\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.800520047783916, \"running_time\": 223, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:17:15.182000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-644 | 0.8005 | (solution-5 , open)\"}, {\"id\": \"SAL-644\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 223, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:17:15.182000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-644 | 0.8005 | (solution-5 , open)\"}, {\"id\": \"SAL-646\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8009691100395747, \"running_time\": 217, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:25:46.056000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-646 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-646\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 217, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:25:46.056000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-646 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-646\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8009691100395747, \"running_time\": 217, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:25:46.056000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-646 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-646\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 217, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 10:25:46.056000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-646 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-654\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8039692991342168, \"running_time\": 2286, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 11:23:03.787000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-654 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-654\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2286, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 11:23:03.787000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-654 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-654\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8039692991342168, \"running_time\": 2286, \"owner\": \"czakon\", \"tags\": [\"solution-5\", 
\"open\"], \"timestamp\": \"2018-08-29 11:23:03.787000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-654 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-654\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2286, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 11:23:03.787000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-654 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-660\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8039692991342168, \"running_time\": 2106, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 14:58:22.851000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-660 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-660\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2106, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 14:58:22.851000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-660 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-660\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8039692991342168, \"running_time\": 2106, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 14:58:22.851000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-660 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-660\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2106, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 14:58:22.851000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-660 | 0.8040 | (solution-5 , open)\"}, {\"id\": \"SAL-663\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8038203120661892, \"running_time\": 29635, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 23:31:44.694000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-663 | 0.8038 | (solution-5 , open)\"}, {\"id\": \"SAL-663\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 29635, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 23:31:44.694000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-663 | 0.8038 | (solution-5 , open)\"}, {\"id\": \"SAL-663\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8038203120661892, \"running_time\": 29635, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 23:31:44.694000+00:00\", \"timestamp_day\": \"2018-08-29\", \"text\": \"SAL-663 | 0.8038 | (solution-5 , open)\"}, {\"id\": \"SAL-663\", \"resource\": \"running_time_day\", \"time_or_count\": 17.494444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 29635, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-29 23:31:44.694000+00:00\", \"timestamp_day\": \"2018-08-29\", 
\"text\": \"SAL-663 | 0.8038 | (solution-5 , open)\"}, {\"id\": \"SAL-664\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8021685478582031, \"running_time\": 30230, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 00:43:33.632000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-664 | 0.8022 | (solution-5 , open)\"}, {\"id\": \"SAL-664\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30230, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 00:43:33.632000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-664 | 0.8022 | (solution-5 , open)\"}, {\"id\": \"SAL-664\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.8021685478582031, \"running_time\": 30230, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 00:43:33.632000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-664 | 0.8022 | (solution-5 , open)\"}, {\"id\": \"SAL-664\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30230, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 00:43:33.632000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-664 | 0.8022 | (solution-5 , open)\"}, {\"id\": \"SAL-668\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.7991942092017054, \"running_time\": 25043, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 03:45:21.516000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-668 | 0.7992 | (solution-5 , open)\"}, {\"id\": \"SAL-668\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 25043, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 03:45:21.516000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-668 | 0.7992 | (solution-5 , open)\"}, {\"id\": \"SAL-668\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.7991942092017054, \"running_time\": 25043, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 03:45:21.516000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-668 | 0.7992 | (solution-5 , open)\"}, {\"id\": \"SAL-668\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 25043, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 03:45:21.516000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-668 | 0.7992 | (solution-5 , open)\"}, {\"id\": \"SAL-671\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8036445365905635, \"running_time\": 2259, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 05:57:00.221000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-671 | 0.8036 | (solution-5 , open)\"}, {\"id\": \"SAL-671\", \"resource\": 
\"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2259, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 05:57:00.221000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-671 | 0.8036 | (solution-5 , open)\"}, {\"id\": \"SAL-671\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.8036445365905635, \"running_time\": 2259, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 05:57:00.221000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-671 | 0.8036 | (solution-5 , open)\"}, {\"id\": \"SAL-671\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2259, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 05:57:00.221000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-671 | 0.8036 | (solution-5 , open)\"}, {\"id\": \"SAL-665\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.7923936305120712, \"running_time\": 36352, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 06:50:36.762000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-665 | 0.7924 | (solution-5 , open)\"}, {\"id\": \"SAL-665\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 36352, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 06:50:36.762000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-665 | 0.7924 | (solution-5 , open)\"}, {\"id\": \"SAL-665\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.7923936305120712, \"running_time\": 36352, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 06:50:36.762000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-665 | 0.7924 | (solution-5 , open)\"}, {\"id\": \"SAL-665\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 36352, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 06:50:36.762000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-665 | 0.7924 | (solution-5 , open)\"}, {\"id\": \"SAL-670\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8007702229966097, \"running_time\": 23788, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 11:53:56.851000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-670 | 0.8008 | (solution-5 , open)\"}, {\"id\": \"SAL-670\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 23788, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 11:53:56.851000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-670 | 0.8008 | (solution-5 , open)\"}, {\"id\": \"SAL-670\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 
0.8007702229966097, \"running_time\": 23788, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 11:53:56.851000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-670 | 0.8008 | (solution-5 , open)\"}, {\"id\": \"SAL-670\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 23788, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 11:53:56.851000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-670 | 0.8008 | (solution-5 , open)\"}, {\"id\": \"SAL-669\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.7918174546360456, \"running_time\": 30836, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 13:50:17.498000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-669 | 0.7918 | (solution-5 , open)\"}, {\"id\": \"SAL-669\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30836, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 13:50:17.498000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-669 | 0.7918 | (solution-5 , open)\"}, {\"id\": \"SAL-669\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.7918174546360456, \"running_time\": 30836, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 13:50:17.498000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-669 | 0.7918 | (solution-5 , open)\"}, {\"id\": \"SAL-669\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30836, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 13:50:17.498000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-669 | 0.7918 | (solution-5 , open)\"}, {\"id\": \"SAL-693\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.34269659464562013, \"running_time\": 8404, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:16:42.061000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-693 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-693\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8404, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:16:42.061000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-693 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-693\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.34269659464562013, \"running_time\": 8404, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:16:42.061000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-693 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-693\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8404, \"owner\": \"czakon\", \"tags\": [\"solution-5\", 
\"open\"], \"timestamp\": \"2018-08-30 16:16:42.061000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-693 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-695\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.34269659464562013, \"running_time\": 8449, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:24:34.590000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-695 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-695\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8449, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:24:34.590000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-695 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-695\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.34269659464562013, \"running_time\": 8449, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:24:34.590000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-695 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-695\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8449, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 16:24:34.590000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-695 | 0.3427 | (solution-5 , open)\"}, {\"id\": \"SAL-672\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8000939470204838, \"running_time\": 27582, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 19:22:11.489000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-672 | 0.8001 | (solution-5 , open)\"}, {\"id\": \"SAL-672\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 27582, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 19:22:11.489000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-672 | 0.8001 | (solution-5 , open)\"}, {\"id\": \"SAL-672\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.8000939470204838, \"running_time\": 27582, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 19:22:11.489000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-672 | 0.8001 | (solution-5 , open)\"}, {\"id\": \"SAL-672\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 27582, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-30 19:22:11.489000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-672 | 0.8001 | (solution-5 , open)\"}, {\"id\": \"SAL-718\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.7327421874648262, \"running_time\": 20008, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"192\", \"open\"], \"timestamp\": \"2018-08-30 22:47:27.071000+00:00\", \"timestamp_day\": 
\"2018-08-30\", \"text\": \"SAL-718 | 0.7327 | (solution-5 , 192 , open)\"}, {\"id\": \"SAL-718\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 20008, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"192\", \"open\"], \"timestamp\": \"2018-08-30 22:47:27.071000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-718 | 0.7327 | (solution-5 , 192 , open)\"}, {\"id\": \"SAL-718\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"actual\", \"metric\": 0.7327421874648262, \"running_time\": 20008, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"192\", \"open\"], \"timestamp\": \"2018-08-30 22:47:27.071000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-718 | 0.7327 | (solution-5 , 192 , open)\"}, {\"id\": \"SAL-718\", \"resource\": \"running_time_day\", \"time_or_count\": 59.153055555555554, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 20008, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"192\", \"open\"], \"timestamp\": \"2018-08-30 22:47:27.071000+00:00\", \"timestamp_day\": \"2018-08-30\", \"text\": \"SAL-718 | 0.7327 | (solution-5 , 192 , open)\"}, {\"id\": \"SAL-714\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.6304527165846506, \"running_time\": 30747, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 00:33:31.011000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-714 | 0.6305 | (solution-5 , open)\"}, {\"id\": \"SAL-714\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30747, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 00:33:31.011000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-714 | 0.6305 | (solution-5 , open)\"}, {\"id\": \"SAL-714\", \"resource\": \"running_time_day\", \"time_or_count\": 19.461111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.6304527165846506, \"running_time\": 30747, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 00:33:31.011000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-714 | 0.6305 | (solution-5 , open)\"}, {\"id\": \"SAL-714\", \"resource\": \"running_time_day\", \"time_or_count\": 19.461111111111112, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30747, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 00:33:31.011000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-714 | 0.6305 | (solution-5 , open)\"}, {\"id\": \"SAL-706\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.6502480741611176, \"running_time\": 39313, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 01:39:54.355000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-706 | 0.6502 | (solution-5 , open)\"}, {\"id\": \"SAL-706\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 39313, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 01:39:54.355000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-706 | 0.6502 | 
(solution-5 , open)\"}, {\"id\": \"SAL-706\", \"resource\": \"running_time_day\", \"time_or_count\": 19.461111111111112, \"actual_or_best\": \"actual\", \"metric\": 0.6502480741611176, \"running_time\": 39313, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 01:39:54.355000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-706 | 0.6502 | (solution-5 , open)\"}, {\"id\": \"SAL-706\", \"resource\": \"running_time_day\", \"time_or_count\": 19.461111111111112, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 39313, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-08-31 01:39:54.355000+00:00\", \"timestamp_day\": \"2018-08-31\", \"text\": \"SAL-706 | 0.6502 | (solution-5 , open)\"}, {\"id\": \"SAL-756\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.6570629850240045, \"running_time\": 34757, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 00:39:03.195000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-756 | 0.6571 | (solution-5 , open)\"}, {\"id\": \"SAL-756\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 34757, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 00:39:03.195000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-756 | 0.6571 | (solution-5 , open)\"}, {\"id\": \"SAL-756\", \"resource\": \"running_time_day\", \"time_or_count\": 30.308611111111112, \"actual_or_best\": \"actual\", \"metric\": 0.6570629850240045, \"running_time\": 34757, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 00:39:03.195000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-756 | 0.6571 | (solution-5 , open)\"}, {\"id\": \"SAL-756\", \"resource\": \"running_time_day\", \"time_or_count\": 30.308611111111112, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 34757, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 00:39:03.195000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-756 | 0.6571 | (solution-5 , open)\"}, {\"id\": \"SAL-743\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.726540295918107, \"running_time\": 41933, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 01:00:11.538000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-743 | 0.7265 | (solution-5 , open)\"}, {\"id\": \"SAL-743\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 41933, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 01:00:11.538000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-743 | 0.7265 | (solution-5 , open)\"}, {\"id\": \"SAL-743\", \"resource\": \"running_time_day\", \"time_or_count\": 30.308611111111112, \"actual_or_best\": \"actual\", \"metric\": 0.726540295918107, \"running_time\": 41933, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 01:00:11.538000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-743 | 0.7265 | (solution-5 , open)\"}, {\"id\": \"SAL-743\", \"resource\": \"running_time_day\", 
\"time_or_count\": 30.308611111111112, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 41933, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 01:00:11.538000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-743 | 0.7265 | (solution-5 , open)\"}, {\"id\": \"SAL-770\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.7969174196685441, \"running_time\": 32421, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 21:10:43.091000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-770 | 0.7969 | (solution-5 , open)\"}, {\"id\": \"SAL-770\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 32421, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 21:10:43.091000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-770 | 0.7969 | (solution-5 , open)\"}, {\"id\": \"SAL-770\", \"resource\": \"running_time_day\", \"time_or_count\": 30.308611111111112, \"actual_or_best\": \"actual\", \"metric\": 0.7969174196685441, \"running_time\": 32421, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 21:10:43.091000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-770 | 0.7969 | (solution-5 , open)\"}, {\"id\": \"SAL-770\", \"resource\": \"running_time_day\", \"time_or_count\": 30.308611111111112, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 32421, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-01 21:10:43.091000+00:00\", \"timestamp_day\": \"2018-09-01\", \"text\": \"SAL-770 | 0.7969 | (solution-5 , open)\"}, {\"id\": \"SAL-775\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8022662842752798, \"running_time\": 69310, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:08:07.815000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-775 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-775\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 69310, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:08:07.815000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-775 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-775\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8022662842752798, \"running_time\": 69310, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:08:07.815000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-775 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-775\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 69310, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:08:07.815000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-775 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-795\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8030179729954843, 
\"running_time\": 109, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:40:00.141000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-795 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-795\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 109, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:40:00.141000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-795 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-795\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8030179729954843, \"running_time\": 109, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:40:00.141000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-795 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-795\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 109, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 09:40:00.141000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-795 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-796\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8030179729954843, \"running_time\": 2244, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:17:52.292000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-796 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-796\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2244, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:17:52.292000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-796 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-796\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8030179729954843, \"running_time\": 2244, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:17:52.292000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-796 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-796\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 2244, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:17:52.292000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-796 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-802\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8022662842752798, \"running_time\": 935, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:44:08.261000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-802 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-802\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 935, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 
10:44:08.261000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-802 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-802\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8022662842752798, \"running_time\": 935, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:44:08.261000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-802 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-802\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 935, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 10:44:08.261000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-802 | 0.8023 | (solution-5 , open)\"}, {\"id\": \"SAL-820\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.799467533500517, \"running_time\": 6397, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:15:43.868000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-820 | 0.7995 | (solution-5 , open)\"}, {\"id\": \"SAL-820\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 6397, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:15:43.868000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-820 | 0.7995 | (solution-5 , open)\"}, {\"id\": \"SAL-820\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.799467533500517, \"running_time\": 6397, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:15:43.868000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-820 | 0.7995 | (solution-5 , open)\"}, {\"id\": \"SAL-820\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 6397, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:15:43.868000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-820 | 0.7995 | (solution-5 , open)\"}, {\"id\": \"SAL-827\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8030165848006928, \"running_time\": 6408, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:47:24.943000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-827 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-827\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 6408, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:47:24.943000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-827 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-827\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8030165848006928, \"running_time\": 6408, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:47:24.943000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-827 | 0.8030 | (solution-5 , 
open)\"}, {\"id\": \"SAL-827\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 6408, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 15:47:24.943000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-827 | 0.8030 | (solution-5 , open)\"}, {\"id\": \"SAL-793\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.7984181332757047, \"running_time\": 30702, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:02:04.113000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-793 | 0.7984 | (solution-5 , open)\"}, {\"id\": \"SAL-793\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30702, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:02:04.113000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-793 | 0.7984 | (solution-5 , open)\"}, {\"id\": \"SAL-793\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.7984181332757047, \"running_time\": 30702, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:02:04.113000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-793 | 0.7984 | (solution-5 , open)\"}, {\"id\": \"SAL-793\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 30702, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:02:04.113000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-793 | 0.7984 | (solution-5 , open)\"}, {\"id\": \"SAL-825\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8009669339504422, \"running_time\": 8348, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:16:06.765000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-825 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-825\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8348, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:16:06.765000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-825 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-825\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8009669339504422, \"running_time\": 8348, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:16:06.765000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-825 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-825\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8348, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 16:16:06.765000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-825 | 0.8010 | (solution-5 , open)\"}, {\"id\": \"SAL-828\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, 
\"actual_or_best\": \"actual\", \"metric\": 0.8025671098384741, \"running_time\": 1890, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 20:02:21.195000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-828 | 0.8026 | (solution-5 , open)\"}, {\"id\": \"SAL-828\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 1890, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 20:02:21.195000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-828 | 0.8026 | (solution-5 , open)\"}, {\"id\": \"SAL-828\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8025671098384741, \"running_time\": 1890, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 20:02:21.195000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-828 | 0.8026 | (solution-5 , open)\"}, {\"id\": \"SAL-828\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 1890, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 20:02:21.195000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-828 | 0.8026 | (solution-5 , open)\"}, {\"id\": \"SAL-829\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.7996176836506672, \"running_time\": 8026, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 21:47:14.629000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-829 | 0.7996 | (solution-5 , open)\"}, {\"id\": \"SAL-829\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8026, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 21:47:14.629000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-829 | 0.7996 | (solution-5 , open)\"}, {\"id\": \"SAL-829\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"actual\", \"metric\": 0.7996176836506672, \"running_time\": 8026, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 21:47:14.629000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-829 | 0.7996 | (solution-5 , open)\"}, {\"id\": \"SAL-829\", \"resource\": \"running_time_day\", \"time_or_count\": 37.32472222222222, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 8026, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-02 21:47:14.629000+00:00\", \"timestamp_day\": \"2018-09-02\", \"text\": \"SAL-829 | 0.7996 | (solution-5 , open)\"}, {\"id\": \"SAL-833\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8075430753091922, \"running_time\": 46650, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:36:40.061000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-833 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-833\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 46650, \"owner\": \"czakon\", 
\"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:36:40.061000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-833 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-833\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8075430753091922, \"running_time\": 46650, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:36:40.061000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-833 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-833\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 46650, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:36:40.061000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-833 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-847\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8075430753091922, \"running_time\": 127, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:58:31.105000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-847 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-847\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 127, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:58:31.105000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-847 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-847\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8075430753091922, \"running_time\": 127, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:58:31.105000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-847 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-847\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 127, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 08:58:31.105000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-847 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-848\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.804940847894371, \"running_time\": 163, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:01:25.806000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-848 | 0.8049 | (solution-5 , open)\"}, {\"id\": \"SAL-848\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 163, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:01:25.806000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-848 | 0.8049 | (solution-5 , open)\"}, {\"id\": \"SAL-848\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.804940847894371, \"running_time\": 163, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:01:25.806000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-848 | 0.8049 | (solution-5 , 
open)\"}, {\"id\": \"SAL-848\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 163, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:01:25.806000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-848 | 0.8049 | (solution-5 , open)\"}, {\"id\": \"SAL-849\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8075430753091922, \"running_time\": 1469, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:26:20.666000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-849 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-849\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 1469, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:26:20.666000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-849 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-849\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8075430753091922, \"running_time\": 1469, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:26:20.666000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-849 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-849\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 1469, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 09:26:20.666000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-849 | 0.8075 | (solution-5 , open)\"}, {\"id\": \"SAL-891\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8075, \"running_time\": 5015, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-09-03 15:09:27.911000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-891 | 0.8075 | (solution-1)\"}, {\"id\": \"SAL-891\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 5015, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-09-03 15:09:27.911000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-891 | 0.8075 | (solution-1)\"}, {\"id\": \"SAL-891\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8075, \"running_time\": 5015, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-09-03 15:09:27.911000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-891 | 0.8075 | (solution-1)\"}, {\"id\": \"SAL-891\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 5015, \"owner\": \"czakon\", \"tags\": [\"solution-1\"], \"timestamp\": \"2018-09-03 15:09:27.911000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-891 | 0.8075 | (solution-1)\"}, {\"id\": \"SAL-836\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.807967075021048, \"running_time\": 46727, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": 
\"2018-09-03 19:09:03.802000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-836 | 0.8080 | (solution-5 , open)\"}, {\"id\": \"SAL-836\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 46727, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 19:09:03.802000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-836 | 0.8080 | (solution-5 , open)\"}, {\"id\": \"SAL-836\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.807967075021048, \"running_time\": 46727, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 19:09:03.802000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-836 | 0.8080 | (solution-5 , open)\"}, {\"id\": \"SAL-836\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8082499999999999, \"running_time\": 46727, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 19:09:03.802000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-836 | 0.8080 | (solution-5 , open)\"}, {\"id\": \"SAL-839\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8124680152416285, \"running_time\": 49336, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 19:59:56.765000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-839 | 0.8125 | (solution-6 , open)\"}, {\"id\": \"SAL-839\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 49336, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 19:59:56.765000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-839 | 0.8125 | (solution-6 , open)\"}, {\"id\": \"SAL-839\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8124680152416285, \"running_time\": 49336, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 19:59:56.765000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-839 | 0.8125 | (solution-6 , open)\"}, {\"id\": \"SAL-839\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 49336, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 19:59:56.765000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-839 | 0.8125 | (solution-6 , open)\"}, {\"id\": \"SAL-840\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8049925862894379, \"running_time\": 56965, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 22:17:11.244000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-840 | 0.8050 | (solution-5 , open)\"}, {\"id\": \"SAL-840\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 56965, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 22:17:11.244000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-840 | 0.8050 | (solution-5 , open)\"}, {\"id\": \"SAL-840\", 
\"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8049925862894379, \"running_time\": 56965, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 22:17:11.244000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-840 | 0.8050 | (solution-5 , open)\"}, {\"id\": \"SAL-840\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 56965, \"owner\": \"czakon\", \"tags\": [\"solution-5\", \"open\"], \"timestamp\": \"2018-09-03 22:17:11.244000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-840 | 0.8050 | (solution-5 , open)\"}, {\"id\": \"SAL-910\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"actual\", \"metric\": 0.8082727029878455, \"running_time\": 5390, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 22:22:00.640000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-910 | 0.8083 | (solution-6 , open)\"}, {\"id\": \"SAL-910\", \"resource\": \"experiment_count_day\", \"time_or_count\": 9.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 5390, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 22:22:00.640000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-910 | 0.8083 | (solution-6 , open)\"}, {\"id\": \"SAL-910\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"actual\", \"metric\": 0.8082727029878455, \"running_time\": 5390, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 22:22:00.640000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-910 | 0.8083 | (solution-6 , open)\"}, {\"id\": \"SAL-910\", \"resource\": \"running_time_day\", \"time_or_count\": 58.845, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 5390, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-03 22:22:00.640000+00:00\", \"timestamp_day\": \"2018-09-03\", \"text\": \"SAL-910 | 0.8083 | (solution-6 , open)\"}, {\"id\": \"SAL-988\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8087501669585627, \"running_time\": 17210, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"baseline\"], \"timestamp\": \"2018-09-04 19:43:21.595000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-988 | 0.8088 | (solution-7 , baseline)\"}, {\"id\": \"SAL-988\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 17210, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"baseline\"], \"timestamp\": \"2018-09-04 19:43:21.595000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-988 | 0.8088 | (solution-7 , baseline)\"}, {\"id\": \"SAL-988\", \"resource\": \"running_time_day\", \"time_or_count\": 10.246666666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8087501669585627, \"running_time\": 17210, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"baseline\"], \"timestamp\": \"2018-09-04 19:43:21.595000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-988 | 0.8088 | (solution-7 , baseline)\"}, {\"id\": \"SAL-988\", \"resource\": \"running_time_day\", \"time_or_count\": 10.246666666666666, \"actual_or_best\": \"best\", \"metric\": 
0.8124680152416285, \"running_time\": 17210, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"baseline\"], \"timestamp\": \"2018-09-04 19:43:21.595000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-988 | 0.8088 | (solution-7 , baseline)\"}, {\"id\": \"SAL-987\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8104235419827623, \"running_time\": 19678, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"lovash\"], \"timestamp\": \"2018-09-04 20:22:14.450000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-987 | 0.8104 | (solution-7 , lovash)\"}, {\"id\": \"SAL-987\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 19678, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"lovash\"], \"timestamp\": \"2018-09-04 20:22:14.450000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-987 | 0.8104 | (solution-7 , lovash)\"}, {\"id\": \"SAL-987\", \"resource\": \"running_time_day\", \"time_or_count\": 10.246666666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8104235419827623, \"running_time\": 19678, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"lovash\"], \"timestamp\": \"2018-09-04 20:22:14.450000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-987 | 0.8104 | (solution-7 , lovash)\"}, {\"id\": \"SAL-987\", \"resource\": \"running_time_day\", \"time_or_count\": 10.246666666666666, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 19678, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"lovash\"], \"timestamp\": \"2018-09-04 20:22:14.450000+00:00\", \"timestamp_day\": \"2018-09-04\", \"text\": \"SAL-987 | 0.8104 | (solution-7 , lovash)\"}, {\"id\": \"SAL-989\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8087501669585627, \"running_time\": 770, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 06:46:12.800000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-989 | 0.8088 | ()\"}, {\"id\": \"SAL-989\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 770, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 06:46:12.800000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-989 | 0.8088 | ()\"}, {\"id\": \"SAL-989\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8087501669585627, \"running_time\": 770, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 06:46:12.800000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-989 | 0.8088 | ()\"}, {\"id\": \"SAL-989\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 770, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 06:46:12.800000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-989 | 0.8088 | ()\"}, {\"id\": \"SAL-991\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8104235419827623, \"running_time\": 776, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-05 07:09:51.888000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-991 | 0.8104 | (solution-7)\"}, {\"id\": \"SAL-991\", 
\"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 776, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-05 07:09:51.888000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-991 | 0.8104 | (solution-7)\"}, {\"id\": \"SAL-991\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8104235419827623, \"running_time\": 776, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-05 07:09:51.888000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-991 | 0.8104 | (solution-7)\"}, {\"id\": \"SAL-991\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"best\", \"metric\": 0.8124680152416285, \"running_time\": 776, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-05 07:09:51.888000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-991 | 0.8104 | (solution-7)\"}, {\"id\": \"SAL-984\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8188218578398488, \"running_time\": 67362, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-05 09:24:21.968000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-984 | 0.8188 | (solution-6 , open)\"}, {\"id\": \"SAL-984\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8188218578398488, \"running_time\": 67362, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-05 09:24:21.968000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-984 | 0.8188 | (solution-6 , open)\"}, {\"id\": \"SAL-984\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8188218578398488, \"running_time\": 67362, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-05 09:24:21.968000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-984 | 0.8188 | (solution-6 , open)\"}, {\"id\": \"SAL-984\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"best\", \"metric\": 0.8188218578398488, \"running_time\": 67362, \"owner\": \"czakon\", \"tags\": [\"solution-6\", \"open\"], \"timestamp\": \"2018-09-05 09:24:21.968000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-984 | 0.8188 | (solution-6 , open)\"}, {\"id\": \"SAL-986\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8205491098294697, \"running_time\": 67358, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 09:26:34.547000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-986 | 0.8205 | ()\"}, {\"id\": \"SAL-986\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8205491098294697, \"running_time\": 67358, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 09:26:34.547000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-986 | 0.8205 | ()\"}, {\"id\": \"SAL-986\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8205491098294697, \"running_time\": 67358, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 09:26:34.547000+00:00\", 
\"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-986 | 0.8205 | ()\"}, {\"id\": \"SAL-986\", \"resource\": \"running_time_day\", \"time_or_count\": 37.85166666666667, \"actual_or_best\": \"best\", \"metric\": 0.8205491098294697, \"running_time\": 67358, \"owner\": \"czakon\", \"tags\": [], \"timestamp\": \"2018-09-05 09:26:34.547000+00:00\", \"timestamp_day\": \"2018-09-05\", \"text\": \"SAL-986 | 0.8205 | ()\"}, {\"id\": \"SAL-1056\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8174240207223716, \"running_time\": 20299, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 01:54:38.405000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1056 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1056\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8205491098294697, \"running_time\": 20299, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 01:54:38.405000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1056 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1056\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8174240207223716, \"running_time\": 20299, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 01:54:38.405000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1056 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1056\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8205491098294697, \"running_time\": 20299, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 01:54:38.405000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1056 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1057\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8140985438211826, \"running_time\": 20869, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 02:39:10.580000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1057 | 0.8141 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1057\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8205491098294697, \"running_time\": 20869, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 02:39:10.580000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1057 | 0.8141 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1057\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8140985438211826, \"running_time\": 20869, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 02:39:10.580000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1057 | 0.8141 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1057\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8205491098294697, \"running_time\": 20869, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 02:39:10.580000+00:00\", \"timestamp_day\": \"2018-09-06\", 
\"text\": \"SAL-1057 | 0.8141 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1038\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8293243393318356, \"running_time\": 47563, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:11:19.567000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1038 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1038\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 47563, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:11:19.567000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1038 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1038\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8293243393318356, \"running_time\": 47563, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:11:19.567000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1038 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1038\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 47563, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:11:19.567000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1038 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1059\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8293243393318356, \"running_time\": 1415, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:54:49.823000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1059 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1059\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 1415, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:54:49.823000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1059 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1059\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8293243393318356, \"running_time\": 1415, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:54:49.823000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1059 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1059\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 1415, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 03:54:49.823000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1059 | 0.8293 | (solution-7)\"}, {\"id\": \"SAL-1036\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8288233886059974, \"running_time\": 51382, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:13:07.514000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1036 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1036\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 51382, 
\"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:13:07.514000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1036 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1036\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8288233886059974, \"running_time\": 51382, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:13:07.514000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1036 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1036\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 51382, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:13:07.514000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1036 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1061\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8288233886059974, \"running_time\": 1423, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:42:58.985000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1061 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1061\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 1423, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:42:58.985000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1061 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1061\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8288233886059974, \"running_time\": 1423, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:42:58.985000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1061 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1061\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 1423, \"owner\": \"czakon\", \"tags\": [\"solution-7\"], \"timestamp\": \"2018-09-06 04:42:58.985000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1061 | 0.8288 | (solution-7)\"}, {\"id\": \"SAL-1069\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8174240207223716, \"running_time\": 572, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 05:38:17.027000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1069 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1069\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 572, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 05:38:17.027000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1069 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1069\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8174240207223716, \"running_time\": 572, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 05:38:17.027000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1069 | 
0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1069\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8293243393318356, \"running_time\": 572, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 05:38:17.027000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1069 | 0.8174 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1070\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8313981522752137, \"running_time\": 3168, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"tta\"], \"timestamp\": \"2018-09-06 08:31:21.476000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1070 | 0.8314 | (solution-7 , tta)\"}, {\"id\": \"SAL-1070\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 3168, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"tta\"], \"timestamp\": \"2018-09-06 08:31:21.476000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1070 | 0.8314 | (solution-7 , tta)\"}, {\"id\": \"SAL-1070\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8313981522752137, \"running_time\": 3168, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"tta\"], \"timestamp\": \"2018-09-06 08:31:21.476000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1070 | 0.8314 | (solution-7 , tta)\"}, {\"id\": \"SAL-1070\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 3168, \"owner\": \"czakon\", \"tags\": [\"solution-7\", \"tta\"], \"timestamp\": \"2018-09-06 08:31:21.476000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1070 | 0.8314 | (solution-7 , tta)\"}, {\"id\": \"SAL-1071\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.8147733190461826, \"running_time\": 17105, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-06 13:47:58.012000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1071 | 0.8148 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1071\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 17105, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-06 13:47:58.012000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1071 | 0.8148 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1071\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8147733190461826, \"running_time\": 17105, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-06 13:47:58.012000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1071 | 0.8148 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1071\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 17105, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-06 13:47:58.012000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1071 | 0.8148 | (solution-8 , 
hypercolumn)\"}, {\"id\": \"SAL-1085\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"actual\", \"metric\": 0.818647508077793, \"running_time\": 18280, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 15:05:33.070000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1085 | 0.8186 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1085\", \"resource\": \"experiment_count_day\", \"time_or_count\": 10.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 18280, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 15:05:33.070000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1085 | 0.8186 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1085\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"actual\", \"metric\": 0.818647508077793, \"running_time\": 18280, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 15:05:33.070000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1085 | 0.8186 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1085\", \"resource\": \"running_time_day\", \"time_or_count\": 50.57666666666667, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 18280, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\"], \"timestamp\": \"2018-09-06 15:05:33.070000+00:00\", \"timestamp_day\": \"2018-09-06\", \"text\": \"SAL-1085 | 0.8186 | (hypercolumn , solution-8)\"}, {\"id\": \"SAL-1078\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8268484001242622, \"running_time\": 59356, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 02:06:37.338000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1078 | 0.8268 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1078\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 59356, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 02:06:37.338000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1078 | 0.8268 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1078\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.8268484001242622, \"running_time\": 59356, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 02:06:37.338000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1078 | 0.8268 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1078\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 59356, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 02:06:37.338000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1078 | 0.8268 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1132\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.816473182327755, \"running_time\": 34076, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 06:42:04.753000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": 
\"SAL-1132 | 0.8165 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1132\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 34076, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 06:42:04.753000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1132 | 0.8165 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1132\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.816473182327755, \"running_time\": 34076, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 06:42:04.753000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1132 | 0.8165 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1132\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 34076, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 06:42:04.753000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1132 | 0.8165 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1135\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8183479206342774, \"running_time\": 31386, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:49:28.058000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1135 | 0.8183 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1135\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 31386, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:49:28.058000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1135 | 0.8183 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1135\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.8183479206342774, \"running_time\": 31386, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:49:28.058000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1135 | 0.8183 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1135\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 31386, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:49:28.058000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1135 | 0.8183 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1134\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8190721456088772, \"running_time\": 32323, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:58:45.366000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1134 | 0.8191 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1134\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 32323, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:58:45.366000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1134 | 
0.8191 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1134\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.8190721456088772, \"running_time\": 32323, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:58:45.366000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1134 | 0.8191 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1134\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 32323, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 06:58:45.366000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1134 | 0.8191 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1133\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8145718432075254, \"running_time\": 36051, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 07:39:52.520000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1133 | 0.8146 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1133\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 36051, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 07:39:52.520000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1133 | 0.8146 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1133\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.8145718432075254, \"running_time\": 36051, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 07:39:52.520000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1133 | 0.8146 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1133\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 36051, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_blocks\"], \"timestamp\": \"2018-09-07 07:39:52.520000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1133 | 0.8146 | (solution-8 , se_blocks)\"}, {\"id\": \"SAL-1137\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8166223194708951, \"running_time\": 36869, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 08:39:45.135000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1137 | 0.8166 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1137\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 36869, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 08:39:45.135000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1137 | 0.8166 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1137\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.8166223194708951, \"running_time\": 36869, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 08:39:45.135000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1137 | 
0.8166 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1137\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 36869, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-07 08:39:45.135000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1137 | 0.8166 | (solution-8 , se_block)\"}, {\"id\": \"SAL-1087\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.82724966345656, \"running_time\": 82706, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 12:02:50.951000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1087 | 0.8272 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1087\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 82706, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 12:02:50.951000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1087 | 0.8272 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1087\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"actual\", \"metric\": 0.82724966345656, \"running_time\": 82706, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 12:02:50.951000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1087 | 0.8272 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1087\", \"resource\": \"running_time_day\", \"time_or_count\": 86.87972222222223, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 82706, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"hypercolumn\"], \"timestamp\": \"2018-09-07 12:02:50.951000+00:00\", \"timestamp_day\": \"2018-09-07\", \"text\": \"SAL-1087 | 0.8272 | (solution-8 , hypercolumn)\"}, {\"id\": \"SAL-1190\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8199002975989482, \"running_time\": 41261, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 00:55:15.295000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1190 | 0.8199 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1190\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 41261, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 00:55:15.295000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1190 | 0.8199 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1190\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8199002975989482, \"running_time\": 41261, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 00:55:15.295000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1190 | 0.8199 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1190\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 41261, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 00:55:15.295000+00:00\", \"timestamp_day\": 
\"2018-09-08\", \"text\": \"SAL-1190 | 0.8199 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1208\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.814923506715111, \"running_time\": 36814, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 07:00:55.684000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1208 | 0.8149 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1208\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 36814, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 07:00:55.684000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1208 | 0.8149 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1208\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.814923506715111, \"running_time\": 36814, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 07:00:55.684000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1208 | 0.8149 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1208\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 36814, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 07:00:55.684000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1208 | 0.8149 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1213\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8179474827150988, \"running_time\": 38297, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-8\"], \"timestamp\": \"2018-09-08 07:40:23.335000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1213 | 0.8179 | (loss_design , solution-8)\"}, {\"id\": \"SAL-1213\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 38297, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-8\"], \"timestamp\": \"2018-09-08 07:40:23.335000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1213 | 0.8179 | (loss_design , solution-8)\"}, {\"id\": \"SAL-1213\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8179474827150988, \"running_time\": 38297, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-8\"], \"timestamp\": \"2018-09-08 07:40:23.335000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1213 | 0.8179 | (loss_design , solution-8)\"}, {\"id\": \"SAL-1213\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 38297, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-8\"], \"timestamp\": \"2018-09-08 07:40:23.335000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1213 | 0.8179 | (loss_design , solution-8)\"}, {\"id\": \"SAL-1150\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8183490837164, \"running_time\": 85545, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\", \"se_block\"], \"timestamp\": 
\"2018-09-08 08:01:49.761000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1150 | 0.8183 | (hypercolumn , solution-8 , se_block)\"}, {\"id\": \"SAL-1150\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 85545, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-08 08:01:49.761000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1150 | 0.8183 | (hypercolumn , solution-8 , se_block)\"}, {\"id\": \"SAL-1150\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8183490837164, \"running_time\": 85545, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-08 08:01:49.761000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1150 | 0.8183 | (hypercolumn , solution-8 , se_block)\"}, {\"id\": \"SAL-1150\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 85545, \"owner\": \"czakon\", \"tags\": [\"hypercolumn\", \"solution-8\", \"se_block\"], \"timestamp\": \"2018-09-08 08:01:49.761000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1150 | 0.8183 | (hypercolumn , solution-8 , se_block)\"}, {\"id\": \"SAL-1210\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8148232940586763, \"running_time\": 40578, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:10:40.037000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1210 | 0.8148 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1210\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 40578, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:10:40.037000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1210 | 0.8148 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1210\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8148232940586763, \"running_time\": 40578, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:10:40.037000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1210 | 0.8148 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1210\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 40578, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:10:40.037000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1210 | 0.8148 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1211\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8176967447207327, \"running_time\": 42277, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:43:36.620000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1211 | 0.8177 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1211\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 
0.8313981522752137, \"running_time\": 42277, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:43:36.620000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1211 | 0.8177 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1211\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8176967447207327, \"running_time\": 42277, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:43:36.620000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1211 | 0.8177 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1211\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 42277, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 08:43:36.620000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1211 | 0.8177 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1214\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8192499846173011, \"running_time\": 43598, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:10:03.022000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1214 | 0.8192 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1214\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 43598, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:10:03.022000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1214 | 0.8192 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1214\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8192499846173011, \"running_time\": 43598, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:10:03.022000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1214 | 0.8192 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1214\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 43598, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:10:03.022000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1214 | 0.8192 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1212\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8170214067140602, \"running_time\": 45177, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:33:08.449000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1212 | 0.8170 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1212\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 45177, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:33:08.449000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1212 | 0.8170 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1212\", \"resource\": \"running_time_day\", \"time_or_count\": 
143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8170214067140602, \"running_time\": 45177, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:33:08.449000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1212 | 0.8170 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1212\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 45177, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 09:33:08.449000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1212 | 0.8170 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1215\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8199709829769799, \"running_time\": 41698, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 16:17:06.909000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1215 | 0.8200 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1215\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 41698, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 16:17:06.909000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1215 | 0.8200 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1215\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8199709829769799, \"running_time\": 41698, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 16:17:06.909000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1215 | 0.8200 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1215\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 41698, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"loss_design\"], \"timestamp\": \"2018-09-08 16:17:06.909000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1215 | 0.8200 | (solution-8 , loss_design)\"}, {\"id\": \"SAL-1239\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8196758227492861, \"running_time\": 13659, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 16:33:56.880000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1239 | 0.8197 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1239\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 13659, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 16:33:56.880000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1239 | 0.8197 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1239\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8196758227492861, \"running_time\": 13659, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 16:33:56.880000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1239 | 0.8197 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1239\", 
\"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 13659, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 16:33:56.880000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1239 | 0.8197 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1240\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8220266743505125, \"running_time\": 21340, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 19:23:58.431000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1240 | 0.8220 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1240\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 21340, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 19:23:58.431000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1240 | 0.8220 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1240\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8220266743505125, \"running_time\": 21340, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 19:23:58.431000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1240 | 0.8220 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1240\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 21340, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 19:23:58.431000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1240 | 0.8220 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1242\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.822651549600575, \"running_time\": 19630, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\", \"target_dilation\"], \"timestamp\": \"2018-09-08 19:50:58.409000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1242 | 0.8227 | (solution-8 , finetuning , target_dilation)\"}, {\"id\": \"SAL-1242\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 19630, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\", \"target_dilation\"], \"timestamp\": \"2018-09-08 19:50:58.409000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1242 | 0.8227 | (solution-8 , finetuning , target_dilation)\"}, {\"id\": \"SAL-1242\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.822651549600575, \"running_time\": 19630, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\", \"target_dilation\"], \"timestamp\": \"2018-09-08 19:50:58.409000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1242 | 0.8227 | (solution-8 , finetuning , target_dilation)\"}, {\"id\": \"SAL-1242\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 19630, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\", \"target_dilation\"], 
\"timestamp\": \"2018-09-08 19:50:58.409000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1242 | 0.8227 | (solution-8 , finetuning , target_dilation)\"}, {\"id\": \"SAL-1241\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8223014618816716, \"running_time\": 27158, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 21:51:14.043000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1241 | 0.8223 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1241\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 27158, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 21:51:14.043000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1241 | 0.8223 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1241\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8223014618816716, \"running_time\": 27158, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 21:51:14.043000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1241 | 0.8223 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1241\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 27158, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-08 21:51:14.043000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1241 | 0.8223 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1244\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"actual\", \"metric\": 0.8197997097547322, \"running_time\": 18512, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 21:51:48.298000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1244 | 0.8198 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1244\", \"resource\": \"experiment_count_day\", \"time_or_count\": 14.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 18512, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 21:51:48.298000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1244 | 0.8198 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1244\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"actual\", \"metric\": 0.8197997097547322, \"running_time\": 18512, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 21:51:48.298000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1244 | 0.8198 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1244\", \"resource\": \"running_time_day\", \"time_or_count\": 143.20666666666668, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 18512, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-08 21:51:48.298000+00:00\", \"timestamp_day\": \"2018-09-08\", \"text\": \"SAL-1244 | 0.8198 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1319\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.7176510093301699, \"running_time\": 21604, \"owner\": 
\"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-09 17:36:57.918000+00:00\", \"timestamp_day\": \"2018-09-09\", \"text\": \"SAL-1319 | 0.7177 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1319\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 21604, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-09 17:36:57.918000+00:00\", \"timestamp_day\": \"2018-09-09\", \"text\": \"SAL-1319 | 0.7177 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1319\", \"resource\": \"running_time_day\", \"time_or_count\": 6.001111111111111, \"actual_or_best\": \"actual\", \"metric\": 0.7176510093301699, \"running_time\": 21604, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-09 17:36:57.918000+00:00\", \"timestamp_day\": \"2018-09-09\", \"text\": \"SAL-1319 | 0.7177 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1319\", \"resource\": \"running_time_day\", \"time_or_count\": 6.001111111111111, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 21604, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-09 17:36:57.918000+00:00\", \"timestamp_day\": \"2018-09-09\", \"text\": \"SAL-1319 | 0.7177 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1400\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8200758479619048, \"running_time\": 76600, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-10 18:02:24.117000+00:00\", \"timestamp_day\": \"2018-09-10\", \"text\": \"SAL-1400 | 0.8201 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1400\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 76600, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-10 18:02:24.117000+00:00\", \"timestamp_day\": \"2018-09-10\", \"text\": \"SAL-1400 | 0.8201 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1400\", \"resource\": \"running_time_day\", \"time_or_count\": 21.27777777777778, \"actual_or_best\": \"actual\", \"metric\": 0.8200758479619048, \"running_time\": 76600, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-10 18:02:24.117000+00:00\", \"timestamp_day\": \"2018-09-10\", \"text\": \"SAL-1400 | 0.8201 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1400\", \"resource\": \"running_time_day\", \"time_or_count\": 21.27777777777778, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 76600, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-10 18:02:24.117000+00:00\", \"timestamp_day\": \"2018-09-10\", \"text\": \"SAL-1400 | 0.8201 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1456\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8039992016004011, \"running_time\": 28150, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-4\"], \"timestamp\": \"2018-09-11 03:27:29.665000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1456 | 0.8040 | (open , solution-4)\"}, {\"id\": \"SAL-1456\", \"resource\": 
\"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 28150, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-4\"], \"timestamp\": \"2018-09-11 03:27:29.665000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1456 | 0.8040 | (open , solution-4)\"}, {\"id\": \"SAL-1456\", \"resource\": \"running_time_day\", \"time_or_count\": 35.14083333333333, \"actual_or_best\": \"actual\", \"metric\": 0.8039992016004011, \"running_time\": 28150, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-4\"], \"timestamp\": \"2018-09-11 03:27:29.665000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1456 | 0.8040 | (open , solution-4)\"}, {\"id\": \"SAL-1456\", \"resource\": \"running_time_day\", \"time_or_count\": 35.14083333333333, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 28150, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-4\"], \"timestamp\": \"2018-09-11 03:27:29.665000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1456 | 0.8040 | (open , solution-4)\"}, {\"id\": \"SAL-1414\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8297236892064479, \"running_time\": 97656, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"depth_channel\", \"solution-8\"], \"timestamp\": \"2018-09-11 12:59:54.914000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1414 | 0.8297 | (depth_channel , solution-8)\"}, {\"id\": \"SAL-1414\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 97656, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"depth_channel\", \"solution-8\"], \"timestamp\": \"2018-09-11 12:59:54.914000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1414 | 0.8297 | (depth_channel , solution-8)\"}, {\"id\": \"SAL-1414\", \"resource\": \"running_time_day\", \"time_or_count\": 35.14083333333333, \"actual_or_best\": \"actual\", \"metric\": 0.8297236892064479, \"running_time\": 97656, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"depth_channel\", \"solution-8\"], \"timestamp\": \"2018-09-11 12:59:54.914000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1414 | 0.8297 | (depth_channel , solution-8)\"}, {\"id\": \"SAL-1414\", \"resource\": \"running_time_day\", \"time_or_count\": 35.14083333333333, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 97656, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"depth_channel\", \"solution-8\"], \"timestamp\": \"2018-09-11 12:59:54.914000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1414 | 0.8297 | (depth_channel , solution-8)\"}, {\"id\": \"SAL-1488\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.828376502439471, \"running_time\": 701, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-11 13:32:50.752000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1488 | 0.8284 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1488\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 701, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-11 13:32:50.752000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1488 | 0.8284 | 
(solution-8 , depth_channel)\"}, {\"id\": \"SAL-1488\", \"resource\": \"running_time_day\", \"time_or_count\": 35.14083333333333, \"actual_or_best\": \"actual\", \"metric\": 0.828376502439471, \"running_time\": 701, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-11 13:32:50.752000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1488 | 0.8284 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1488\", \"resource\": \"running_time_day\", \"time_or_count\": 35.14083333333333, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 701, \"owner\": \"piotrtarasiewicz\", \"tags\": [\"solution-8\", \"depth_channel\"], \"timestamp\": \"2018-09-11 13:32:50.752000+00:00\", \"timestamp_day\": \"2018-09-11\", \"text\": \"SAL-1488 | 0.8284 | (solution-8 , depth_channel)\"}, {\"id\": \"SAL-1515\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8129739934837387, \"running_time\": 26888, \"owner\": \"neyo\", \"tags\": [\"solution-8\"], \"timestamp\": \"2018-09-12 16:42:18.739000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1515 | 0.8130 | (solution-8)\"}, {\"id\": \"SAL-1515\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 26888, \"owner\": \"neyo\", \"tags\": [\"solution-8\"], \"timestamp\": \"2018-09-12 16:42:18.739000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1515 | 0.8130 | (solution-8)\"}, {\"id\": \"SAL-1515\", \"resource\": \"running_time_day\", \"time_or_count\": 15.563055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8129739934837387, \"running_time\": 26888, \"owner\": \"neyo\", \"tags\": [\"solution-8\"], \"timestamp\": \"2018-09-12 16:42:18.739000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1515 | 0.8130 | (solution-8)\"}, {\"id\": \"SAL-1515\", \"resource\": \"running_time_day\", \"time_or_count\": 15.563055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 26888, \"owner\": \"neyo\", \"tags\": [\"solution-8\"], \"timestamp\": \"2018-09-12 16:42:18.739000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1515 | 0.8130 | (solution-8)\"}, {\"id\": \"SAL-1527\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8163459936698318, \"running_time\": 29139, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-12 22:30:09.671000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1527 | 0.8163 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1527\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 29139, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-12 22:30:09.671000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1527 | 0.8163 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1527\", \"resource\": \"running_time_day\", \"time_or_count\": 15.563055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8163459936698318, \"running_time\": 29139, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-12 22:30:09.671000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1527 | 0.8163 | (solution-8 , center_grad_l2)\"}, {\"id\": 
\"SAL-1527\", \"resource\": \"running_time_day\", \"time_or_count\": 15.563055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 29139, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-12 22:30:09.671000+00:00\", \"timestamp_day\": \"2018-09-12\", \"text\": \"SAL-1527 | 0.8163 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1576\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8281494387941164, \"running_time\": 66059, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:20:27.508000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1576 | 0.8281 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1576\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 66059, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:20:27.508000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1576 | 0.8281 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1576\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8281494387941164, \"running_time\": 66059, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:20:27.508000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1576 | 0.8281 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1576\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 66059, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:20:27.508000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1576 | 0.8281 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1574\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8252234618426523, \"running_time\": 67883, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:44:32.553000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1574 | 0.8252 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1574\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 67883, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:44:32.553000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1574 | 0.8252 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1574\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8252234618426523, \"running_time\": 67883, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:44:32.553000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1574 | 0.8252 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1574\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 67883, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 12:44:32.553000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1574 | 0.8252 | 
(solution-8 , finetuning)\"}, {\"id\": \"SAL-1573\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8283975879927903, \"running_time\": 77278, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-14 15:19:10.652000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1573 | 0.8284 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1573\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 77278, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-14 15:19:10.652000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1573 | 0.8284 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1573\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8283975879927903, \"running_time\": 77278, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-14 15:19:10.652000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1573 | 0.8284 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1573\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"best\", \"metric\": 0.8313981522752137, \"running_time\": 77278, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"solution-8\"], \"timestamp\": \"2018-09-14 15:19:10.652000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1573 | 0.8284 | (finetuning , solution-8)\"}, {\"id\": \"SAL-1575\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8314486525506015, \"running_time\": 79554, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 16:00:37.741000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1575 | 0.8314 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1575\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 79554, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 16:00:37.741000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1575 | 0.8314 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1575\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8314486525506015, \"running_time\": 79554, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 16:00:37.741000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1575 | 0.8314 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1575\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 79554, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 16:00:37.741000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1575 | 0.8314 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1572\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.828373238305772, \"running_time\": 83678, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 17:04:02.142000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": 
\"SAL-1572 | 0.8284 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1572\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 83678, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 17:04:02.142000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1572 | 0.8284 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1572\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"actual\", \"metric\": 0.828373238305772, \"running_time\": 83678, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 17:04:02.142000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1572 | 0.8284 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1572\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 83678, \"owner\": \"czakon\", \"tags\": [\"solution-8\", \"finetuning\"], \"timestamp\": \"2018-09-14 17:04:02.142000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1572 | 0.8284 | (solution-8 , finetuning)\"}, {\"id\": \"SAL-1598\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8281488384936662, \"running_time\": 30113, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-14 18:47:53.401000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1598 | 0.8281 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1598\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 30113, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-14 18:47:53.401000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1598 | 0.8281 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1598\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"actual\", \"metric\": 0.8281488384936662, \"running_time\": 30113, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-14 18:47:53.401000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1598 | 0.8281 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1598\", \"resource\": \"running_time_day\", \"time_or_count\": 112.37916666666666, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 30113, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"center_grad_l2\"], \"timestamp\": \"2018-09-14 18:47:53.401000+00:00\", \"timestamp_day\": \"2018-09-14\", \"text\": \"SAL-1598 | 0.8281 | (solution-8 , center_grad_l2)\"}, {\"id\": \"SAL-1599\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8104486045265656, \"running_time\": 54969, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-5\"], \"timestamp\": \"2018-09-15 01:42:57.089000+00:00\", \"timestamp_day\": \"2018-09-15\", \"text\": \"SAL-1599 | 0.8104 | (open , solution-5)\"}, {\"id\": \"SAL-1599\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 54969, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-5\"], \"timestamp\": \"2018-09-15 01:42:57.089000+00:00\", \"timestamp_day\": 
\"2018-09-15\", \"text\": \"SAL-1599 | 0.8104 | (open , solution-5)\"}, {\"id\": \"SAL-1599\", \"resource\": \"running_time_day\", \"time_or_count\": 15.269166666666667, \"actual_or_best\": \"actual\", \"metric\": 0.8104486045265656, \"running_time\": 54969, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-5\"], \"timestamp\": \"2018-09-15 01:42:57.089000+00:00\", \"timestamp_day\": \"2018-09-15\", \"text\": \"SAL-1599 | 0.8104 | (open , solution-5)\"}, {\"id\": \"SAL-1599\", \"resource\": \"running_time_day\", \"time_or_count\": 15.269166666666667, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 54969, \"owner\": \"czakon\", \"tags\": [\"open\", \"solution-5\"], \"timestamp\": \"2018-09-15 01:42:57.089000+00:00\", \"timestamp_day\": \"2018-09-15\", \"text\": \"SAL-1599 | 0.8104 | (open , solution-5)\"}, {\"id\": \"SAL-1584\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.828673200937069, \"running_time\": 144235, \"owner\": \"czakon\", \"tags\": [\"auxiliary_data\", \"solution-8\"], \"timestamp\": \"2018-09-16 00:12:47.466000+00:00\", \"timestamp_day\": \"2018-09-16\", \"text\": \"SAL-1584 | 0.8287 | (auxiliary_data , solution-8)\"}, {\"id\": \"SAL-1584\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 144235, \"owner\": \"czakon\", \"tags\": [\"auxiliary_data\", \"solution-8\"], \"timestamp\": \"2018-09-16 00:12:47.466000+00:00\", \"timestamp_day\": \"2018-09-16\", \"text\": \"SAL-1584 | 0.8287 | (auxiliary_data , solution-8)\"}, {\"id\": \"SAL-1584\", \"resource\": \"running_time_day\", \"time_or_count\": 40.06527777777778, \"actual_or_best\": \"actual\", \"metric\": 0.828673200937069, \"running_time\": 144235, \"owner\": \"czakon\", \"tags\": [\"auxiliary_data\", \"solution-8\"], \"timestamp\": \"2018-09-16 00:12:47.466000+00:00\", \"timestamp_day\": \"2018-09-16\", \"text\": \"SAL-1584 | 0.8287 | (auxiliary_data , solution-8)\"}, {\"id\": \"SAL-1584\", \"resource\": \"running_time_day\", \"time_or_count\": 40.06527777777778, \"actual_or_best\": \"best\", \"metric\": 0.8314486525506015, \"running_time\": 144235, \"owner\": \"czakon\", \"tags\": [\"auxiliary_data\", \"solution-8\"], \"timestamp\": \"2018-09-16 00:12:47.466000+00:00\", \"timestamp_day\": \"2018-09-16\", \"text\": \"SAL-1584 | 0.8287 | (auxiliary_data , solution-8)\"}, {\"id\": \"SAL-1638\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8321732151942047, \"running_time\": 78059, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"dropout_encoder4\"], \"timestamp\": \"2018-09-17 10:02:23.221000+00:00\", \"timestamp_day\": \"2018-09-17\", \"text\": \"SAL-1638 | 0.8322 | (solution-8 , dropout_encoder4)\"}, {\"id\": \"SAL-1638\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 78059, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"dropout_encoder4\"], \"timestamp\": \"2018-09-17 10:02:23.221000+00:00\", \"timestamp_day\": \"2018-09-17\", \"text\": \"SAL-1638 | 0.8322 | (solution-8 , dropout_encoder4)\"}, {\"id\": \"SAL-1638\", \"resource\": \"running_time_day\", \"time_or_count\": 21.683055555555555, \"actual_or_best\": \"actual\", \"metric\": 0.8321732151942047, \"running_time\": 78059, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"dropout_encoder4\"], \"timestamp\": 
\"2018-09-17 10:02:23.221000+00:00\", \"timestamp_day\": \"2018-09-17\", \"text\": \"SAL-1638 | 0.8322 | (solution-8 , dropout_encoder4)\"}, {\"id\": \"SAL-1638\", \"resource\": \"running_time_day\", \"time_or_count\": 21.683055555555555, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 78059, \"owner\": \"neyo\", \"tags\": [\"solution-8\", \"dropout_encoder4\"], \"timestamp\": \"2018-09-17 10:02:23.221000+00:00\", \"timestamp_day\": \"2018-09-17\", \"text\": \"SAL-1638 | 0.8322 | (solution-8 , dropout_encoder4)\"}, {\"id\": \"SAL-1844\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.829599614607111, \"running_time\": 51383, \"owner\": \"neyo\", \"tags\": [\"solution-9\"], \"timestamp\": \"2018-09-20 04:17:03.484000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1844 | 0.8296 | (solution-9)\"}, {\"id\": \"SAL-1844\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 51383, \"owner\": \"neyo\", \"tags\": [\"solution-9\"], \"timestamp\": \"2018-09-20 04:17:03.484000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1844 | 0.8296 | (solution-9)\"}, {\"id\": \"SAL-1844\", \"resource\": \"running_time_day\", \"time_or_count\": 39.208333333333336, \"actual_or_best\": \"actual\", \"metric\": 0.829599614607111, \"running_time\": 51383, \"owner\": \"neyo\", \"tags\": [\"solution-9\"], \"timestamp\": \"2018-09-20 04:17:03.484000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1844 | 0.8296 | (solution-9)\"}, {\"id\": \"SAL-1844\", \"resource\": \"running_time_day\", \"time_or_count\": 39.208333333333336, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 51383, \"owner\": \"neyo\", \"tags\": [\"solution-9\"], \"timestamp\": \"2018-09-20 04:17:03.484000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1844 | 0.8296 | (solution-9)\"}, {\"id\": \"SAL-1803\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8291483387435412, \"running_time\": 89767, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\"], \"timestamp\": \"2018-09-20 07:17:09.993000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1803 | 0.8291 | (depth , solution-9)\"}, {\"id\": \"SAL-1803\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 89767, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\"], \"timestamp\": \"2018-09-20 07:17:09.993000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1803 | 0.8291 | (depth , solution-9)\"}, {\"id\": \"SAL-1803\", \"resource\": \"running_time_day\", \"time_or_count\": 39.208333333333336, \"actual_or_best\": \"actual\", \"metric\": 0.8291483387435412, \"running_time\": 89767, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\"], \"timestamp\": \"2018-09-20 07:17:09.993000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1803 | 0.8291 | (depth , solution-9)\"}, {\"id\": \"SAL-1803\", \"resource\": \"running_time_day\", \"time_or_count\": 39.208333333333336, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 89767, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\"], \"timestamp\": \"2018-09-20 07:17:09.993000+00:00\", \"timestamp_day\": \"2018-09-20\", \"text\": \"SAL-1803 | 0.8291 | (depth , 
solution-9)\"}, {\"id\": \"SAL-1840\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.829000964982974, \"running_time\": 131409, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"depth\"], \"timestamp\": \"2018-09-21 01:11:18.690000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1840 | 0.8290 | (solution-9 , depth)\"}, {\"id\": \"SAL-1840\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 131409, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"depth\"], \"timestamp\": \"2018-09-21 01:11:18.690000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1840 | 0.8290 | (solution-9 , depth)\"}, {\"id\": \"SAL-1840\", \"resource\": \"running_time_day\", \"time_or_count\": 65.56444444444445, \"actual_or_best\": \"actual\", \"metric\": 0.829000964982974, \"running_time\": 131409, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"depth\"], \"timestamp\": \"2018-09-21 01:11:18.690000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1840 | 0.8290 | (solution-9 , depth)\"}, {\"id\": \"SAL-1840\", \"resource\": \"running_time_day\", \"time_or_count\": 65.56444444444445, \"actual_or_best\": \"best\", \"metric\": 0.8321732151942047, \"running_time\": 131409, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"depth\"], \"timestamp\": \"2018-09-21 01:11:18.690000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1840 | 0.8290 | (solution-9 , depth)\"}, {\"id\": \"SAL-1890\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8343003423213317, \"running_time\": 92010, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-9\"], \"timestamp\": \"2018-09-21 14:10:04.187000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1890 | 0.8343 | (augmentations , solution-9)\"}, {\"id\": \"SAL-1890\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8343003423213317, \"running_time\": 92010, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-9\"], \"timestamp\": \"2018-09-21 14:10:04.187000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1890 | 0.8343 | (augmentations , solution-9)\"}, {\"id\": \"SAL-1890\", \"resource\": \"running_time_day\", \"time_or_count\": 65.56444444444445, \"actual_or_best\": \"actual\", \"metric\": 0.8343003423213317, \"running_time\": 92010, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-9\"], \"timestamp\": \"2018-09-21 14:10:04.187000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1890 | 0.8343 | (augmentations , solution-9)\"}, {\"id\": \"SAL-1890\", \"resource\": \"running_time_day\", \"time_or_count\": 65.56444444444445, \"actual_or_best\": \"best\", \"metric\": 0.8343003423213317, \"running_time\": 92010, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-9\"], \"timestamp\": \"2018-09-21 14:10:04.187000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1890 | 0.8343 | (augmentations , solution-9)\"}, {\"id\": \"SAL-1980\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8459236597917258, \"running_time\": 12613, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-21 23:35:10.612000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1980 | 0.8459 | (stacking , 
solution-9)\"}, {\"id\": \"SAL-1980\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 12613, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-21 23:35:10.612000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1980 | 0.8459 | (stacking , solution-9)\"}, {\"id\": \"SAL-1980\", \"resource\": \"running_time_day\", \"time_or_count\": 65.56444444444445, \"actual_or_best\": \"actual\", \"metric\": 0.8459236597917258, \"running_time\": 12613, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-21 23:35:10.612000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1980 | 0.8459 | (stacking , solution-9)\"}, {\"id\": \"SAL-1980\", \"resource\": \"running_time_day\", \"time_or_count\": 65.56444444444445, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 12613, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-21 23:35:10.612000+00:00\", \"timestamp_day\": \"2018-09-21\", \"text\": \"SAL-1980 | 0.8459 | (stacking , solution-9)\"}, {\"id\": \"SAL-1933\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8283267900584242, \"running_time\": 88484, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"always_sharp\", \"solution-9\"], \"timestamp\": \"2018-09-22 09:53:23.703000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1933 | 0.8283 | (augmentations , always_sharp , solution-9)\"}, {\"id\": \"SAL-1933\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 88484, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"always_sharp\", \"solution-9\"], \"timestamp\": \"2018-09-22 09:53:23.703000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1933 | 0.8283 | (augmentations , always_sharp , solution-9)\"}, {\"id\": \"SAL-1933\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8283267900584242, \"running_time\": 88484, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"always_sharp\", \"solution-9\"], \"timestamp\": \"2018-09-22 09:53:23.703000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1933 | 0.8283 | (augmentations , always_sharp , solution-9)\"}, {\"id\": \"SAL-1933\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 88484, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"always_sharp\", \"solution-9\"], \"timestamp\": \"2018-09-22 09:53:23.703000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1933 | 0.8283 | (augmentations , always_sharp , solution-9)\"}, {\"id\": \"SAL-1938\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8201979340660001, \"running_time\": 84355, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 09:58:11.763000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1938 | 0.8202 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1938\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 84355, \"owner\": \"czakon\", 
\"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 09:58:11.763000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1938 | 0.8202 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1938\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8201979340660001, \"running_time\": 84355, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 09:58:11.763000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1938 | 0.8202 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1938\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 84355, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 09:58:11.763000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1938 | 0.8202 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1951\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8167207312259787, \"running_time\": 79249, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 11:36:43.090000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1951 | 0.8167 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1951\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 79249, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 11:36:43.090000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1951 | 0.8167 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1951\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8167207312259787, \"running_time\": 79249, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 11:36:43.090000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1951 | 0.8167 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1951\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8459236597917258, \"running_time\": 79249, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"large_kernel_matters\"], \"timestamp\": \"2018-09-22 11:36:43.090000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1951 | 0.8167 | (solution-9 , large_kernel_matters)\"}, {\"id\": \"SAL-1977\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8487732485108795, \"running_time\": 69434, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-22 15:02:07.193000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1977 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-1977\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 69434, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-22 15:02:07.193000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1977 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-1977\", \"resource\": 
\"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8487732485108795, \"running_time\": 69434, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-22 15:02:07.193000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1977 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-1977\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 69434, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-22 15:02:07.193000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1977 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-1976\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8486477481979731, \"running_time\": 72186, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 15:46:34.940000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1976 | 0.8486 | (stacking , solution-9)\"}, {\"id\": \"SAL-1976\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 72186, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 15:46:34.940000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1976 | 0.8486 | (stacking , solution-9)\"}, {\"id\": \"SAL-1976\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8486477481979731, \"running_time\": 72186, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 15:46:34.940000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1976 | 0.8486 | (stacking , solution-9)\"}, {\"id\": \"SAL-1976\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 72186, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 15:46:34.940000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1976 | 0.8486 | (stacking , solution-9)\"}, {\"id\": \"SAL-1978\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"actual\", \"metric\": 0.8470486978732857, \"running_time\": 72940, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 16:02:10.299000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1978 | 0.8470 | (stacking , solution-9)\"}, {\"id\": \"SAL-1978\", \"resource\": \"experiment_count_day\", \"time_or_count\": 6.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 72940, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 16:02:10.299000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1978 | 0.8470 | (stacking , solution-9)\"}, {\"id\": \"SAL-1978\", \"resource\": \"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"actual\", \"metric\": 0.8470486978732857, \"running_time\": 72940, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 16:02:10.299000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1978 | 0.8470 | (stacking , solution-9)\"}, {\"id\": \"SAL-1978\", \"resource\": 
\"running_time_day\", \"time_or_count\": 129.62444444444444, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 72940, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-22 16:02:10.299000+00:00\", \"timestamp_day\": \"2018-09-22\", \"text\": \"SAL-1978 | 0.8470 | (stacking , solution-9)\"}, {\"id\": \"SAL-2015\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8471489105297202, \"running_time\": 18721, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-23 03:30:48.386000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2015 | 0.8471 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2015\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 18721, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-23 03:30:48.386000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2015 | 0.8471 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2015\", \"resource\": \"running_time_day\", \"time_or_count\": 13.129722222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8471489105297202, \"running_time\": 18721, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-23 03:30:48.386000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2015 | 0.8471 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2015\", \"resource\": \"running_time_day\", \"time_or_count\": 13.129722222222222, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 18721, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-23 03:30:48.386000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2015 | 0.8471 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2018\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8459990725358041, \"running_time\": 28546, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-23 16:40:17.937000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2018 | 0.8460 | (stacking , solution-9)\"}, {\"id\": \"SAL-2018\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 28546, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-23 16:40:17.937000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2018 | 0.8460 | (stacking , solution-9)\"}, {\"id\": \"SAL-2018\", \"resource\": \"running_time_day\", \"time_or_count\": 13.129722222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8459990725358041, \"running_time\": 28546, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-23 16:40:17.937000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2018 | 0.8460 | (stacking , solution-9)\"}, {\"id\": \"SAL-2018\", \"resource\": \"running_time_day\", \"time_or_count\": 13.129722222222222, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 28546, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-9\"], \"timestamp\": \"2018-09-23 16:40:17.937000+00:00\", \"timestamp_day\": \"2018-09-23\", \"text\": \"SAL-2018 | 0.8460 | (stacking , solution-9)\"}, {\"id\": \"SAL-2009\", \"resource\": 
\"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8296252399325863, \"running_time\": 101619, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"dropout_encoder3\", \"solution-9\"], \"timestamp\": \"2018-09-24 00:18:31.157000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2009 | 0.8296 | (sharpen , dropout_encoder3 , solution-9)\"}, {\"id\": \"SAL-2009\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 101619, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"dropout_encoder3\", \"solution-9\"], \"timestamp\": \"2018-09-24 00:18:31.157000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2009 | 0.8296 | (sharpen , dropout_encoder3 , solution-9)\"}, {\"id\": \"SAL-2009\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8296252399325863, \"running_time\": 101619, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"dropout_encoder3\", \"solution-9\"], \"timestamp\": \"2018-09-24 00:18:31.157000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2009 | 0.8296 | (sharpen , dropout_encoder3 , solution-9)\"}, {\"id\": \"SAL-2009\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 101619, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"dropout_encoder3\", \"solution-9\"], \"timestamp\": \"2018-09-24 00:18:31.157000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2009 | 0.8296 | (sharpen , dropout_encoder3 , solution-9)\"}, {\"id\": \"SAL-2008\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8333260922091507, \"running_time\": 110063, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"ratation_60\"], \"timestamp\": \"2018-09-24 02:27:58.584000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2008 | 0.8333 | (solution-9 , sharpen , ratation_60)\"}, {\"id\": \"SAL-2008\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 110063, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"ratation_60\"], \"timestamp\": \"2018-09-24 02:27:58.584000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2008 | 0.8333 | (solution-9 , sharpen , ratation_60)\"}, {\"id\": \"SAL-2008\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8333260922091507, \"running_time\": 110063, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"ratation_60\"], \"timestamp\": \"2018-09-24 02:27:58.584000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2008 | 0.8333 | (solution-9 , sharpen , ratation_60)\"}, {\"id\": \"SAL-2008\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 110063, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"ratation_60\"], \"timestamp\": \"2018-09-24 02:27:58.584000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2008 | 0.8333 | (solution-9 , sharpen , ratation_60)\"}, {\"id\": \"SAL-2017\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8240721606163884, \"running_time\": 75059, 
\"owner\": \"czakon\", \"tags\": [\"large_kernel_matters\", \"solution-9\"], \"timestamp\": \"2018-09-24 05:29:55.122000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2017 | 0.8241 | (large_kernel_matters , solution-9)\"}, {\"id\": \"SAL-2017\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 75059, \"owner\": \"czakon\", \"tags\": [\"large_kernel_matters\", \"solution-9\"], \"timestamp\": \"2018-09-24 05:29:55.122000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2017 | 0.8241 | (large_kernel_matters , solution-9)\"}, {\"id\": \"SAL-2017\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8240721606163884, \"running_time\": 75059, \"owner\": \"czakon\", \"tags\": [\"large_kernel_matters\", \"solution-9\"], \"timestamp\": \"2018-09-24 05:29:55.122000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2017 | 0.8241 | (large_kernel_matters , solution-9)\"}, {\"id\": \"SAL-2017\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 75059, \"owner\": \"czakon\", \"tags\": [\"large_kernel_matters\", \"solution-9\"], \"timestamp\": \"2018-09-24 05:29:55.122000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2017 | 0.8241 | (large_kernel_matters , solution-9)\"}, {\"id\": \"SAL-2003\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8337730158944551, \"running_time\": 129369, \"owner\": \"neyo\", \"tags\": [\"without_blur\", \"solution-9\", \"sharpen\"], \"timestamp\": \"2018-09-24 07:35:18.859000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2003 | 0.8338 | (without_blur , solution-9 , sharpen)\"}, {\"id\": \"SAL-2003\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 129369, \"owner\": \"neyo\", \"tags\": [\"without_blur\", \"solution-9\", \"sharpen\"], \"timestamp\": \"2018-09-24 07:35:18.859000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2003 | 0.8338 | (without_blur , solution-9 , sharpen)\"}, {\"id\": \"SAL-2003\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8337730158944551, \"running_time\": 129369, \"owner\": \"neyo\", \"tags\": [\"without_blur\", \"solution-9\", \"sharpen\"], \"timestamp\": \"2018-09-24 07:35:18.859000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2003 | 0.8338 | (without_blur , solution-9 , sharpen)\"}, {\"id\": \"SAL-2003\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 129369, \"owner\": \"neyo\", \"tags\": [\"without_blur\", \"solution-9\", \"sharpen\"], \"timestamp\": \"2018-09-24 07:35:18.859000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2003 | 0.8338 | (without_blur , solution-9 , sharpen)\"}, {\"id\": \"SAL-2007\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8343980412196305, \"running_time\": 144866, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"emboss\"], \"timestamp\": \"2018-09-24 12:02:55.475000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2007 | 
0.8344 | (solution-9 , sharpen , emboss)\"}, {\"id\": \"SAL-2007\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 144866, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"emboss\"], \"timestamp\": \"2018-09-24 12:02:55.475000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2007 | 0.8344 | (solution-9 , sharpen , emboss)\"}, {\"id\": \"SAL-2007\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8343980412196305, \"running_time\": 144866, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"emboss\"], \"timestamp\": \"2018-09-24 12:02:55.475000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2007 | 0.8344 | (solution-9 , sharpen , emboss)\"}, {\"id\": \"SAL-2007\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8487732485108795, \"running_time\": 144866, \"owner\": \"neyo\", \"tags\": [\"solution-9\", \"sharpen\", \"emboss\"], \"timestamp\": \"2018-09-24 12:02:55.475000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2007 | 0.8344 | (solution-9 , sharpen , emboss)\"}, {\"id\": \"SAL-2036\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8502981992487239, \"running_time\": 38846, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 21:22:03.094000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2036 | 0.8503 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2036\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 38846, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 21:22:03.094000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2036 | 0.8503 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2036\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8502981992487239, \"running_time\": 38846, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 21:22:03.094000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2036 | 0.8503 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2036\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 38846, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 21:22:03.094000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2036 | 0.8503 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2033\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"actual\", \"metric\": 0.8487740739239991, \"running_time\": 45237, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 23:04:45.630000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2033 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2033\", \"resource\": \"experiment_count_day\", \"time_or_count\": 7.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 45237, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 23:04:45.630000+00:00\", 
\"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2033 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2033\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"actual\", \"metric\": 0.8487740739239991, \"running_time\": 45237, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 23:04:45.630000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2033 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2033\", \"resource\": \"running_time_day\", \"time_or_count\": 179.18305555555557, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 45237, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-24 23:04:45.630000+00:00\", \"timestamp_day\": \"2018-09-24\", \"text\": \"SAL-2033 | 0.8488 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2037\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.8497727862795329, \"running_time\": 49993, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-25 00:38:29.672000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2037 | 0.8498 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2037\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 49993, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-25 00:38:29.672000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2037 | 0.8498 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2037\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.8497727862795329, \"running_time\": 49993, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-25 00:38:29.672000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2037 | 0.8498 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2037\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 49993, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"stacking\"], \"timestamp\": \"2018-09-25 00:38:29.672000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2037 | 0.8498 | (solution-9 , stacking)\"}, {\"id\": \"SAL-2106\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.4333333333333333, \"running_time\": 912, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"pspnet_dev\"], \"timestamp\": \"2018-09-25 07:53:22.339000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2106 | 0.4333 | (solution-10 , pspnet_dev)\"}, {\"id\": \"SAL-2106\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 912, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"pspnet_dev\"], \"timestamp\": \"2018-09-25 07:53:22.339000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2106 | 0.4333 | (solution-10 , pspnet_dev)\"}, {\"id\": \"SAL-2106\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.4333333333333333, \"running_time\": 912, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"pspnet_dev\"], \"timestamp\": \"2018-09-25 07:53:22.339000+00:00\", 
\"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2106 | 0.4333 | (solution-10 , pspnet_dev)\"}, {\"id\": \"SAL-2106\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 912, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"pspnet_dev\"], \"timestamp\": \"2018-09-25 07:53:22.339000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2106 | 0.4333 | (solution-10 , pspnet_dev)\"}, {\"id\": \"SAL-2023\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.822822335078707, \"running_time\": 132194, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 07:58:12.265000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2023 | 0.8228 | (depth , solution-9 , unet)\"}, {\"id\": \"SAL-2023\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 132194, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 07:58:12.265000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2023 | 0.8228 | (depth , solution-9 , unet)\"}, {\"id\": \"SAL-2023\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.822822335078707, \"running_time\": 132194, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 07:58:12.265000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2023 | 0.8228 | (depth , solution-9 , unet)\"}, {\"id\": \"SAL-2023\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 132194, \"owner\": \"czakon\", \"tags\": [\"depth\", \"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 07:58:12.265000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2023 | 0.8228 | (depth , solution-9 , unet)\"}, {\"id\": \"SAL-2021\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.8248730364547456, \"running_time\": 134886, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 08:40:13.649000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2021 | 0.8249 | (solution-9 , unet)\"}, {\"id\": \"SAL-2021\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 134886, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 08:40:13.649000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2021 | 0.8249 | (solution-9 , unet)\"}, {\"id\": \"SAL-2021\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.8248730364547456, \"running_time\": 134886, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 08:40:13.649000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2021 | 0.8249 | (solution-9 , unet)\"}, {\"id\": \"SAL-2021\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 134886, \"owner\": \"czakon\", \"tags\": [\"solution-9\", \"unet\"], \"timestamp\": \"2018-09-25 
08:40:13.649000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2021 | 0.8249 | (solution-9 , unet)\"}, {\"id\": \"SAL-2108\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"actual\", \"metric\": 0.848998248623436, \"running_time\": 38303, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-25 18:45:14.062000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2108 | 0.8490 | (stacking , solution-10)\"}, {\"id\": \"SAL-2108\", \"resource\": \"experiment_count_day\", \"time_or_count\": 5.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 38303, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-25 18:45:14.062000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2108 | 0.8490 | (stacking , solution-10)\"}, {\"id\": \"SAL-2108\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"actual\", \"metric\": 0.848998248623436, \"running_time\": 38303, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-25 18:45:14.062000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2108 | 0.8490 | (stacking , solution-10)\"}, {\"id\": \"SAL-2108\", \"resource\": \"running_time_day\", \"time_or_count\": 98.96888888888888, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 38303, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-25 18:45:14.062000+00:00\", \"timestamp_day\": \"2018-09-25\", \"text\": \"SAL-2108 | 0.8490 | (stacking , solution-10)\"}, {\"id\": \"SAL-2140\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8500466358412386, \"running_time\": 29777, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 15:19:49.505000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2140 | 0.8500 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2140\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 29777, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 15:19:49.505000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2140 | 0.8500 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2140\", \"resource\": \"running_time_day\", \"time_or_count\": 23.531944444444445, \"actual_or_best\": \"actual\", \"metric\": 0.8500466358412386, \"running_time\": 29777, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 15:19:49.505000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2140 | 0.8500 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2140\", \"resource\": \"running_time_day\", \"time_or_count\": 23.531944444444445, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 29777, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 15:19:49.505000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2140 | 0.8500 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2148\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8493985739862802, \"running_time\": 31536, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-10\", \"stacking\"], \"timestamp\": 
\"2018-09-26 19:30:25.869000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2148 | 0.8494 | (loss_design , solution-10 , stacking)\"}, {\"id\": \"SAL-2148\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 31536, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 19:30:25.869000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2148 | 0.8494 | (loss_design , solution-10 , stacking)\"}, {\"id\": \"SAL-2148\", \"resource\": \"running_time_day\", \"time_or_count\": 23.531944444444445, \"actual_or_best\": \"actual\", \"metric\": 0.8493985739862802, \"running_time\": 31536, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 19:30:25.869000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2148 | 0.8494 | (loss_design , solution-10 , stacking)\"}, {\"id\": \"SAL-2148\", \"resource\": \"running_time_day\", \"time_or_count\": 23.531944444444445, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 31536, \"owner\": \"czakon\", \"tags\": [\"loss_design\", \"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-26 19:30:25.869000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2148 | 0.8494 | (loss_design , solution-10 , stacking)\"}, {\"id\": \"SAL-2151\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8499980865423145, \"running_time\": 23402, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-26 22:31:42.783000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2151 | 0.8500 | (stacking , solution-10)\"}, {\"id\": \"SAL-2151\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 23402, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-26 22:31:42.783000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2151 | 0.8500 | (stacking , solution-10)\"}, {\"id\": \"SAL-2151\", \"resource\": \"running_time_day\", \"time_or_count\": 23.531944444444445, \"actual_or_best\": \"actual\", \"metric\": 0.8499980865423145, \"running_time\": 23402, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-26 22:31:42.783000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2151 | 0.8500 | (stacking , solution-10)\"}, {\"id\": \"SAL-2151\", \"resource\": \"running_time_day\", \"time_or_count\": 23.531944444444445, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 23402, \"owner\": \"czakon\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-26 22:31:42.783000+00:00\", \"timestamp_day\": \"2018-09-26\", \"text\": \"SAL-2151 | 0.8500 | (stacking , solution-10)\"}, {\"id\": \"SAL-2103\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8363250056653356, \"running_time\": 160986, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"solution-9\", \"without_blur\", \"emboss\"], \"timestamp\": \"2018-09-27 03:29:46.030000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2103 | 0.8363 | (sharpen , solution-9 , without_blur , emboss)\"}, {\"id\": \"SAL-2103\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, 
\"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 160986, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"solution-9\", \"without_blur\", \"emboss\"], \"timestamp\": \"2018-09-27 03:29:46.030000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2103 | 0.8363 | (sharpen , solution-9 , without_blur , emboss)\"}, {\"id\": \"SAL-2103\", \"resource\": \"running_time_day\", \"time_or_count\": 61.547777777777775, \"actual_or_best\": \"actual\", \"metric\": 0.8363250056653356, \"running_time\": 160986, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"solution-9\", \"without_blur\", \"emboss\"], \"timestamp\": \"2018-09-27 03:29:46.030000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2103 | 0.8363 | (sharpen , solution-9 , without_blur , emboss)\"}, {\"id\": \"SAL-2103\", \"resource\": \"running_time_day\", \"time_or_count\": 61.547777777777775, \"actual_or_best\": \"best\", \"metric\": 0.8502981992487239, \"running_time\": 160986, \"owner\": \"neyo\", \"tags\": [\"sharpen\", \"solution-9\", \"without_blur\", \"emboss\"], \"timestamp\": \"2018-09-27 03:29:46.030000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2103 | 0.8363 | (sharpen , solution-9 , without_blur , emboss)\"}, {\"id\": \"SAL-2173\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8508485246866058, \"running_time\": 26314, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-27 16:54:25.091000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2173 | 0.8508 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2173\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 26314, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-27 16:54:25.091000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2173 | 0.8508 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2173\", \"resource\": \"running_time_day\", \"time_or_count\": 61.547777777777775, \"actual_or_best\": \"actual\", \"metric\": 0.8508485246866058, \"running_time\": 26314, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-27 16:54:25.091000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2173 | 0.8508 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2173\", \"resource\": \"running_time_day\", \"time_or_count\": 61.547777777777775, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 26314, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-27 16:54:25.091000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2173 | 0.8508 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2163\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8507485746616181, \"running_time\": 34272, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-27 17:13:21.740000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2163 | 0.8507 | (raw , stacking , solution-10)\"}, {\"id\": \"SAL-2163\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 34272, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-27 17:13:21.740000+00:00\", 
\"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2163 | 0.8507 | (raw , stacking , solution-10)\"}, {\"id\": \"SAL-2163\", \"resource\": \"running_time_day\", \"time_or_count\": 61.547777777777775, \"actual_or_best\": \"actual\", \"metric\": 0.8507485746616181, \"running_time\": 34272, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-27 17:13:21.740000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2163 | 0.8507 | (raw , stacking , solution-10)\"}, {\"id\": \"SAL-2163\", \"resource\": \"running_time_day\", \"time_or_count\": 61.547777777777775, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 34272, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\"], \"timestamp\": \"2018-09-27 17:13:21.740000+00:00\", \"timestamp_day\": \"2018-09-27\", \"text\": \"SAL-2163 | 0.8507 | (raw , stacking , solution-10)\"}, {\"id\": \"SAL-2171\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8478487107797452, \"running_time\": 88126, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\", \"rnn\"], \"timestamp\": \"2018-09-28 09:25:04.563000+00:00\", \"timestamp_day\": \"2018-09-28\", \"text\": \"SAL-2171 | 0.8478 | (raw , stacking , solution-10 , rnn)\"}, {\"id\": \"SAL-2171\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 88126, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\", \"rnn\"], \"timestamp\": \"2018-09-28 09:25:04.563000+00:00\", \"timestamp_day\": \"2018-09-28\", \"text\": \"SAL-2171 | 0.8478 | (raw , stacking , solution-10 , rnn)\"}, {\"id\": \"SAL-2171\", \"resource\": \"running_time_day\", \"time_or_count\": 24.479444444444443, \"actual_or_best\": \"actual\", \"metric\": 0.8478487107797452, \"running_time\": 88126, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\", \"rnn\"], \"timestamp\": \"2018-09-28 09:25:04.563000+00:00\", \"timestamp_day\": \"2018-09-28\", \"text\": \"SAL-2171 | 0.8478 | (raw , stacking , solution-10 , rnn)\"}, {\"id\": \"SAL-2171\", \"resource\": \"running_time_day\", \"time_or_count\": 24.479444444444443, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 88126, \"owner\": \"czakon\", \"tags\": [\"raw\", \"stacking\", \"solution-10\", \"rnn\"], \"timestamp\": \"2018-09-28 09:25:04.563000+00:00\", \"timestamp_day\": \"2018-09-28\", \"text\": \"SAL-2171 | 0.8478 | (raw , stacking , solution-10 , rnn)\"}, {\"id\": \"SAL-2194\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.819424634529582, \"running_time\": 144141, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"seresnetxt\"], \"timestamp\": \"2018-09-29 09:06:43.026000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2194 | 0.8194 | (solution-10 , seresnetxt)\"}, {\"id\": \"SAL-2194\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 144141, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"seresnetxt\"], \"timestamp\": \"2018-09-29 09:06:43.026000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2194 | 0.8194 | (solution-10 , seresnetxt)\"}, {\"id\": \"SAL-2194\", \"resource\": \"running_time_day\", \"time_or_count\": 80.61527777777778, \"actual_or_best\": \"actual\", 
\"metric\": 0.819424634529582, \"running_time\": 144141, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"seresnetxt\"], \"timestamp\": \"2018-09-29 09:06:43.026000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2194 | 0.8194 | (solution-10 , seresnetxt)\"}, {\"id\": \"SAL-2194\", \"resource\": \"running_time_day\", \"time_or_count\": 80.61527777777778, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 144141, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"seresnetxt\"], \"timestamp\": \"2018-09-29 09:06:43.026000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2194 | 0.8194 | (solution-10 , seresnetxt)\"}, {\"id\": \"SAL-2204\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8250498624561593, \"running_time\": 146074, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"densenet\"], \"timestamp\": \"2018-09-29 11:11:27.289000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2204 | 0.8250 | (solution-10 , densenet)\"}, {\"id\": \"SAL-2204\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 146074, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"densenet\"], \"timestamp\": \"2018-09-29 11:11:27.289000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2204 | 0.8250 | (solution-10 , densenet)\"}, {\"id\": \"SAL-2204\", \"resource\": \"running_time_day\", \"time_or_count\": 80.61527777777778, \"actual_or_best\": \"actual\", \"metric\": 0.8250498624561593, \"running_time\": 146074, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"densenet\"], \"timestamp\": \"2018-09-29 11:11:27.289000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2204 | 0.8250 | (solution-10 , densenet)\"}, {\"id\": \"SAL-2204\", \"resource\": \"running_time_day\", \"time_or_count\": 80.61527777777778, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 146074, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"densenet\"], \"timestamp\": \"2018-09-29 11:11:27.289000+00:00\", \"timestamp_day\": \"2018-09-29\", \"text\": \"SAL-2204 | 0.8250 | (solution-10 , densenet)\"}, {\"id\": \"SAL-2208\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8314745780263021, \"running_time\": 119904, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\"], \"timestamp\": \"2018-09-30 00:05:42.860000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2208 | 0.8315 | (augmentations , solution-10)\"}, {\"id\": \"SAL-2208\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 119904, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\"], \"timestamp\": \"2018-09-30 00:05:42.860000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2208 | 0.8315 | (augmentations , solution-10)\"}, {\"id\": \"SAL-2208\", \"resource\": \"running_time_day\", \"time_or_count\": 45.308055555555555, \"actual_or_best\": \"actual\", \"metric\": 0.8314745780263021, \"running_time\": 119904, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\"], \"timestamp\": \"2018-09-30 00:05:42.860000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2208 | 0.8315 | (augmentations , solution-10)\"}, {\"id\": \"SAL-2208\", \"resource\": \"running_time_day\", \"time_or_count\": 
45.308055555555555, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 119904, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\"], \"timestamp\": \"2018-09-30 00:05:42.860000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2208 | 0.8315 | (augmentations , solution-10)\"}, {\"id\": \"SAL-2212\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.850273211742477, \"running_time\": 43205, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-30 18:11:25.921000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2212 | 0.8503 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2212\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 43205, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-30 18:11:25.921000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2212 | 0.8503 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2212\", \"resource\": \"running_time_day\", \"time_or_count\": 45.308055555555555, \"actual_or_best\": \"actual\", \"metric\": 0.850273211742477, \"running_time\": 43205, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-30 18:11:25.921000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2212 | 0.8503 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2212\", \"resource\": \"running_time_day\", \"time_or_count\": 45.308055555555555, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 43205, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-09-30 18:11:25.921000+00:00\", \"timestamp_day\": \"2018-09-30\", \"text\": \"SAL-2212 | 0.8503 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2218\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8064492528510518, \"running_time\": 113818, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-01 20:11:43.926000+00:00\", \"timestamp_day\": \"2018-10-01\", \"text\": \"SAL-2218 | 0.8064 | (solution-10 , bce)\"}, {\"id\": \"SAL-2218\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 113818, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-01 20:11:43.926000+00:00\", \"timestamp_day\": \"2018-10-01\", \"text\": \"SAL-2218 | 0.8064 | (solution-10 , bce)\"}, {\"id\": \"SAL-2218\", \"resource\": \"running_time_day\", \"time_or_count\": 31.61611111111111, \"actual_or_best\": \"actual\", \"metric\": 0.8064492528510518, \"running_time\": 113818, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-01 20:11:43.926000+00:00\", \"timestamp_day\": \"2018-10-01\", \"text\": \"SAL-2218 | 0.8064 | (solution-10 , bce)\"}, {\"id\": \"SAL-2218\", \"resource\": \"running_time_day\", \"time_or_count\": 31.61611111111111, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 113818, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-01 20:11:43.926000+00:00\", \"timestamp_day\": \"2018-10-01\", \"text\": \"SAL-2218 | 0.8064 | (solution-10 , bce)\"}, {\"id\": \"SAL-2220\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, 
\"actual_or_best\": \"actual\", \"metric\": 0.8325744410077244, \"running_time\": 144279, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\", \"depth\"], \"timestamp\": \"2018-10-02 05:17:17.359000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2220 | 0.8326 | (augmentations , solution-10 , depth)\"}, {\"id\": \"SAL-2220\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 144279, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\", \"depth\"], \"timestamp\": \"2018-10-02 05:17:17.359000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2220 | 0.8326 | (augmentations , solution-10 , depth)\"}, {\"id\": \"SAL-2220\", \"resource\": \"running_time_day\", \"time_or_count\": 47.17305555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8325744410077244, \"running_time\": 144279, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\", \"depth\"], \"timestamp\": \"2018-10-02 05:17:17.359000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2220 | 0.8326 | (augmentations , solution-10 , depth)\"}, {\"id\": \"SAL-2220\", \"resource\": \"running_time_day\", \"time_or_count\": 47.17305555555556, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 144279, \"owner\": \"neyo\", \"tags\": [\"augmentations\", \"solution-10\", \"depth\"], \"timestamp\": \"2018-10-02 05:17:17.359000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2220 | 0.8326 | (augmentations , solution-10 , depth)\"}, {\"id\": \"SAL-2227\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"actual\", \"metric\": 0.8286485510998256, \"running_time\": 25544, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"higher_momentum\"], \"timestamp\": \"2018-10-02 17:47:52.378000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2227 | 0.8286 | (solution-10 , higher_momentum)\"}, {\"id\": \"SAL-2227\", \"resource\": \"experiment_count_day\", \"time_or_count\": 2.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 25544, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"higher_momentum\"], \"timestamp\": \"2018-10-02 17:47:52.378000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2227 | 0.8286 | (solution-10 , higher_momentum)\"}, {\"id\": \"SAL-2227\", \"resource\": \"running_time_day\", \"time_or_count\": 47.17305555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8286485510998256, \"running_time\": 25544, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"higher_momentum\"], \"timestamp\": \"2018-10-02 17:47:52.378000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2227 | 0.8286 | (solution-10 , higher_momentum)\"}, {\"id\": \"SAL-2227\", \"resource\": \"running_time_day\", \"time_or_count\": 47.17305555555556, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 25544, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"higher_momentum\"], \"timestamp\": \"2018-10-02 17:47:52.378000+00:00\", \"timestamp_day\": \"2018-10-02\", \"text\": \"SAL-2227 | 0.8286 | (solution-10 , higher_momentum)\"}, {\"id\": \"SAL-2226\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8292998020509265, \"running_time\": 137678, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"lower_lr\"], \"timestamp\": \"2018-10-03 00:20:20.521000+00:00\", \"timestamp_day\": 
\"2018-10-03\", \"text\": \"SAL-2226 | 0.8293 | (solution-10 , lower_lr)\"}, {\"id\": \"SAL-2226\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 137678, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"lower_lr\"], \"timestamp\": \"2018-10-03 00:20:20.521000+00:00\", \"timestamp_day\": \"2018-10-03\", \"text\": \"SAL-2226 | 0.8293 | (solution-10 , lower_lr)\"}, {\"id\": \"SAL-2226\", \"resource\": \"running_time_day\", \"time_or_count\": 38.24388888888889, \"actual_or_best\": \"actual\", \"metric\": 0.8292998020509265, \"running_time\": 137678, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"lower_lr\"], \"timestamp\": \"2018-10-03 00:20:20.521000+00:00\", \"timestamp_day\": \"2018-10-03\", \"text\": \"SAL-2226 | 0.8293 | (solution-10 , lower_lr)\"}, {\"id\": \"SAL-2226\", \"resource\": \"running_time_day\", \"time_or_count\": 38.24388888888889, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 137678, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"lower_lr\"], \"timestamp\": \"2018-10-03 00:20:20.521000+00:00\", \"timestamp_day\": \"2018-10-03\", \"text\": \"SAL-2226 | 0.8293 | (solution-10 , lower_lr)\"}, {\"id\": \"SAL-2260\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8336507046776912, \"running_time\": 130646, \"owner\": \"neyo\", \"tags\": [\"transpose_convolution\", \"solution-10\", \"hypercolumn\"], \"timestamp\": \"2018-10-04 20:54:32.223000+00:00\", \"timestamp_day\": \"2018-10-04\", \"text\": \"SAL-2260 | 0.8337 | (transpose_convolution , solution-10 , hypercolumn)\"}, {\"id\": \"SAL-2260\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 130646, \"owner\": \"neyo\", \"tags\": [\"transpose_convolution\", \"solution-10\", \"hypercolumn\"], \"timestamp\": \"2018-10-04 20:54:32.223000+00:00\", \"timestamp_day\": \"2018-10-04\", \"text\": \"SAL-2260 | 0.8337 | (transpose_convolution , solution-10 , hypercolumn)\"}, {\"id\": \"SAL-2260\", \"resource\": \"running_time_day\", \"time_or_count\": 36.29055555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8336507046776912, \"running_time\": 130646, \"owner\": \"neyo\", \"tags\": [\"transpose_convolution\", \"solution-10\", \"hypercolumn\"], \"timestamp\": \"2018-10-04 20:54:32.223000+00:00\", \"timestamp_day\": \"2018-10-04\", \"text\": \"SAL-2260 | 0.8337 | (transpose_convolution , solution-10 , hypercolumn)\"}, {\"id\": \"SAL-2260\", \"resource\": \"running_time_day\", \"time_or_count\": 36.29055555555556, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 130646, \"owner\": \"neyo\", \"tags\": [\"transpose_convolution\", \"solution-10\", \"hypercolumn\"], \"timestamp\": \"2018-10-04 20:54:32.223000+00:00\", \"timestamp_day\": \"2018-10-04\", \"text\": \"SAL-2260 | 0.8337 | (transpose_convolution , solution-10 , hypercolumn)\"}, {\"id\": \"SAL-2266\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8135721553637595, \"running_time\": 55344, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-05 01:48:29.926000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2266 | 0.8136 | (solution-10 , bce)\"}, {\"id\": \"SAL-2266\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": 
\"best\", \"metric\": 0.8508485246866058, \"running_time\": 55344, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-05 01:48:29.926000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2266 | 0.8136 | (solution-10 , bce)\"}, {\"id\": \"SAL-2266\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8135721553637595, \"running_time\": 55344, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-05 01:48:29.926000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2266 | 0.8136 | (solution-10 , bce)\"}, {\"id\": \"SAL-2266\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 55344, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"bce\"], \"timestamp\": \"2018-10-05 01:48:29.926000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2266 | 0.8136 | (solution-10 , bce)\"}, {\"id\": \"SAL-2282\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.84924741082912, \"running_time\": 26093, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 03:01:42.255000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2282 | 0.8492 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2282\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 26093, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 03:01:42.255000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2282 | 0.8492 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2282\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"actual\", \"metric\": 0.84924741082912, \"running_time\": 26093, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 03:01:42.255000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2282 | 0.8492 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2282\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 26093, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 03:01:42.255000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2282 | 0.8492 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2277\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8497981489735614, \"running_time\": 65232, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 13:45:09.440000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2277 | 0.8498 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2277\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 65232, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 13:45:09.440000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2277 | 0.8498 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2277\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"actual\", \"metric\": 
0.8497981489735614, \"running_time\": 65232, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 13:45:09.440000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2277 | 0.8498 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2277\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 65232, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-05 13:45:09.440000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2277 | 0.8498 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2263\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8339000000000001, \"running_time\": 133338, \"owner\": \"neyo\", \"tags\": [\"10_fold\", \"solution-10\"], \"timestamp\": \"2018-10-05 19:12:15.360000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2263 | 0.8339 | (10_fold , solution-10)\"}, {\"id\": \"SAL-2263\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 133338, \"owner\": \"neyo\", \"tags\": [\"10_fold\", \"solution-10\"], \"timestamp\": \"2018-10-05 19:12:15.360000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2263 | 0.8339 | (10_fold , solution-10)\"}, {\"id\": \"SAL-2263\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"actual\", \"metric\": 0.8339000000000001, \"running_time\": 133338, \"owner\": \"neyo\", \"tags\": [\"10_fold\", \"solution-10\"], \"timestamp\": \"2018-10-05 19:12:15.360000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2263 | 0.8339 | (10_fold , solution-10)\"}, {\"id\": \"SAL-2263\", \"resource\": \"running_time_day\", \"time_or_count\": 77.77972222222222, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 133338, \"owner\": \"neyo\", \"tags\": [\"10_fold\", \"solution-10\"], \"timestamp\": \"2018-10-05 19:12:15.360000+00:00\", \"timestamp_day\": \"2018-10-05\", \"text\": \"SAL-2263 | 0.8339 | (10_fold , solution-10)\"}, {\"id\": \"SAL-2283\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.834324391858125, \"running_time\": 126084, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"transpose_convolution\", \"hypercolumn\"], \"timestamp\": \"2018-10-06 07:20:01.508000+00:00\", \"timestamp_day\": \"2018-10-06\", \"text\": \"SAL-2283 | 0.8343 | (solution-10 , transpose_convolution , hypercolumn)\"}, {\"id\": \"SAL-2283\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 126084, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"transpose_convolution\", \"hypercolumn\"], \"timestamp\": \"2018-10-06 07:20:01.508000+00:00\", \"timestamp_day\": \"2018-10-06\", \"text\": \"SAL-2283 | 0.8343 | (solution-10 , transpose_convolution , hypercolumn)\"}, {\"id\": \"SAL-2283\", \"resource\": \"running_time_day\", \"time_or_count\": 35.02333333333333, \"actual_or_best\": \"actual\", \"metric\": 0.834324391858125, \"running_time\": 126084, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"transpose_convolution\", \"hypercolumn\"], \"timestamp\": \"2018-10-06 07:20:01.508000+00:00\", \"timestamp_day\": \"2018-10-06\", \"text\": \"SAL-2283 | 0.8343 | (solution-10 , transpose_convolution 
, hypercolumn)\"}, {\"id\": \"SAL-2283\", \"resource\": \"running_time_day\", \"time_or_count\": 35.02333333333333, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 126084, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"transpose_convolution\", \"hypercolumn\"], \"timestamp\": \"2018-10-06 07:20:01.508000+00:00\", \"timestamp_day\": \"2018-10-06\", \"text\": \"SAL-2283 | 0.8343 | (solution-10 , transpose_convolution , hypercolumn)\"}, {\"id\": \"SAL-2297\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"actual\", \"metric\": 0.8491729360544954, \"running_time\": 46170, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"lovash\", \"stacking\"], \"timestamp\": \"2018-10-10 08:12:53.989000+00:00\", \"timestamp_day\": \"2018-10-10\", \"text\": \"SAL-2297 | 0.8492 | (solution-10 , lovash , stacking)\"}, {\"id\": \"SAL-2297\", \"resource\": \"experiment_count_day\", \"time_or_count\": 1.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 46170, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"lovash\", \"stacking\"], \"timestamp\": \"2018-10-10 08:12:53.989000+00:00\", \"timestamp_day\": \"2018-10-10\", \"text\": \"SAL-2297 | 0.8492 | (solution-10 , lovash , stacking)\"}, {\"id\": \"SAL-2297\", \"resource\": \"running_time_day\", \"time_or_count\": 12.825, \"actual_or_best\": \"actual\", \"metric\": 0.8491729360544954, \"running_time\": 46170, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"lovash\", \"stacking\"], \"timestamp\": \"2018-10-10 08:12:53.989000+00:00\", \"timestamp_day\": \"2018-10-10\", \"text\": \"SAL-2297 | 0.8492 | (solution-10 , lovash , stacking)\"}, {\"id\": \"SAL-2297\", \"resource\": \"running_time_day\", \"time_or_count\": 12.825, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 46170, \"owner\": \"czakon\", \"tags\": [\"solution-10\", \"lovash\", \"stacking\"], \"timestamp\": \"2018-10-10 08:12:53.989000+00:00\", \"timestamp_day\": \"2018-10-10\", \"text\": \"SAL-2297 | 0.8492 | (solution-10 , lovash , stacking)\"}, {\"id\": \"SAL-2305\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8305235895565731, \"running_time\": 127879, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\"], \"timestamp\": \"2018-10-13 18:06:46.399000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2305 | 0.8305 | (solution-10 , empty_vs_non_empty)\"}, {\"id\": \"SAL-2305\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 127879, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\"], \"timestamp\": \"2018-10-13 18:06:46.399000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2305 | 0.8305 | (solution-10 , empty_vs_non_empty)\"}, {\"id\": \"SAL-2305\", \"resource\": \"running_time_day\", \"time_or_count\": 84.69305555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8305235895565731, \"running_time\": 127879, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\"], \"timestamp\": \"2018-10-13 18:06:46.399000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2305 | 0.8305 | (solution-10 , empty_vs_non_empty)\"}, {\"id\": \"SAL-2305\", \"resource\": \"running_time_day\", \"time_or_count\": 84.69305555555556, \"actual_or_best\": \"best\", \"metric\": 0.8508485246866058, \"running_time\": 127879, \"owner\": 
\"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\"], \"timestamp\": \"2018-10-13 18:06:46.399000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2305 | 0.8305 | (solution-10 , empty_vs_non_empty)\"}, {\"id\": \"SAL-2320\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.851748300024162, \"running_time\": 38874, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"lovash\", \"solution-10\"], \"timestamp\": \"2018-10-13 18:29:47.825000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2320 | 0.8517 | (finetuning , lovash , solution-10)\"}, {\"id\": \"SAL-2320\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.851748300024162, \"running_time\": 38874, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"lovash\", \"solution-10\"], \"timestamp\": \"2018-10-13 18:29:47.825000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2320 | 0.8517 | (finetuning , lovash , solution-10)\"}, {\"id\": \"SAL-2320\", \"resource\": \"running_time_day\", \"time_or_count\": 84.69305555555556, \"actual_or_best\": \"actual\", \"metric\": 0.851748300024162, \"running_time\": 38874, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"lovash\", \"solution-10\"], \"timestamp\": \"2018-10-13 18:29:47.825000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2320 | 0.8517 | (finetuning , lovash , solution-10)\"}, {\"id\": \"SAL-2320\", \"resource\": \"running_time_day\", \"time_or_count\": 84.69305555555556, \"actual_or_best\": \"best\", \"metric\": 0.851748300024162, \"running_time\": 38874, \"owner\": \"czakon\", \"tags\": [\"finetuning\", \"lovash\", \"solution-10\"], \"timestamp\": \"2018-10-13 18:29:47.825000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2320 | 0.8517 | (finetuning , lovash , solution-10)\"}, {\"id\": \"SAL-2311\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"actual\", \"metric\": 0.8305229892561226, \"running_time\": 138142, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\", \"hypercolumn\"], \"timestamp\": \"2018-10-13 21:45:05.158000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2311 | 0.8305 | (solution-10 , empty_vs_non_empty , hypercolumn)\"}, {\"id\": \"SAL-2311\", \"resource\": \"experiment_count_day\", \"time_or_count\": 3.0, \"actual_or_best\": \"best\", \"metric\": 0.851748300024162, \"running_time\": 138142, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\", \"hypercolumn\"], \"timestamp\": \"2018-10-13 21:45:05.158000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2311 | 0.8305 | (solution-10 , empty_vs_non_empty , hypercolumn)\"}, {\"id\": \"SAL-2311\", \"resource\": \"running_time_day\", \"time_or_count\": 84.69305555555556, \"actual_or_best\": \"actual\", \"metric\": 0.8305229892561226, \"running_time\": 138142, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\", \"hypercolumn\"], \"timestamp\": \"2018-10-13 21:45:05.158000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2311 | 0.8305 | (solution-10 , empty_vs_non_empty , hypercolumn)\"}, {\"id\": \"SAL-2311\", \"resource\": \"running_time_day\", \"time_or_count\": 84.69305555555556, \"actual_or_best\": \"best\", \"metric\": 0.851748300024162, \"running_time\": 138142, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"empty_vs_non_empty\", \"hypercolumn\"], \"timestamp\": \"2018-10-13 
21:45:05.158000+00:00\", \"timestamp_day\": \"2018-10-13\", \"text\": \"SAL-2311 | 0.8305 | (solution-10 , empty_vs_non_empty , hypercolumn)\"}, {\"id\": \"SAL-2342\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8515728622175399, \"running_time\": 46191, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 03:22:50.232000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2342 | 0.8516 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2342\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.851748300024162, \"running_time\": 46191, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 03:22:50.232000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2342 | 0.8516 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2342\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"actual\", \"metric\": 0.8515728622175399, \"running_time\": 46191, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 03:22:50.232000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2342 | 0.8516 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2342\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"best\", \"metric\": 0.851748300024162, \"running_time\": 46191, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 03:22:50.232000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2342 | 0.8516 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2341\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8529482255869064, \"running_time\": 50412, \"owner\": \"neyo\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-10-18 04:13:43.036000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2341 | 0.8529 | (stacking , solution-10)\"}, {\"id\": \"SAL-2341\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8529482255869064, \"running_time\": 50412, \"owner\": \"neyo\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-10-18 04:13:43.036000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2341 | 0.8529 | (stacking , solution-10)\"}, {\"id\": \"SAL-2341\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"actual\", \"metric\": 0.8529482255869064, \"running_time\": 50412, \"owner\": \"neyo\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-10-18 04:13:43.036000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2341 | 0.8529 | (stacking , solution-10)\"}, {\"id\": \"SAL-2341\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"best\", \"metric\": 0.8529482255869064, \"running_time\": 50412, \"owner\": \"neyo\", \"tags\": [\"stacking\", \"solution-10\"], \"timestamp\": \"2018-10-18 04:13:43.036000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2341 | 0.8529 | (stacking , solution-10)\"}, {\"id\": \"SAL-2340\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8514730247488869, \"running_time\": 56271, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 
05:50:45.435000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2340 | 0.8515 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2340\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8529482255869064, \"running_time\": 56271, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 05:50:45.435000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2340 | 0.8515 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2340\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"actual\", \"metric\": 0.8514730247488869, \"running_time\": 56271, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 05:50:45.435000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2340 | 0.8515 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2340\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"best\", \"metric\": 0.8529482255869064, \"running_time\": 56271, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 05:50:45.435000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2340 | 0.8515 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2343\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"actual\", \"metric\": 0.8529482255869064, \"running_time\": 3748, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 08:15:45.396000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2343 | 0.8529 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2343\", \"resource\": \"experiment_count_day\", \"time_or_count\": 4.0, \"actual_or_best\": \"best\", \"metric\": 0.8529482255869064, \"running_time\": 3748, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 08:15:45.396000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2343 | 0.8529 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2343\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"actual\", \"metric\": 0.8529482255869064, \"running_time\": 3748, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 08:15:45.396000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2343 | 0.8529 | (solution-10 , stacking)\"}, {\"id\": \"SAL-2343\", \"resource\": \"running_time_day\", \"time_or_count\": 43.50611111111111, \"actual_or_best\": \"best\", \"metric\": 0.8529482255869064, \"running_time\": 3748, \"owner\": \"neyo\", \"tags\": [\"solution-10\", \"stacking\"], \"timestamp\": \"2018-10-18 08:15:45.396000+00:00\", \"timestamp_day\": \"2018-10-18\", \"text\": \"SAL-2343 | 0.8529 | (solution-10 , stacking)\"}]}};\n var embedOpt = {\"mode\": \"vega-lite\"};\n\n function showError(el, error){\n el.innerHTML = ('<div class=\"error\" style=\"color:red;\">'\n + '<p>JavaScript Error: ' + error.message + '</p>'\n + \"<p>This usually means there's a typo in your chart specification. \"\n + \"See the javascript console for the full traceback.</p>\"\n + '</div>');\n throw error;\n }\n const el = document.getElementById('vis');\n vegaEmbed(\"#vis\", spec, embedOpt)\n .catch(error => showError(el, error));\n\n </script>\n</body>\n</html>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0dbed7d00289956ff0d542d5fe7fae655e563c1 | 38,626 | ipynb | Jupyter Notebook | kaggle_nn_prototype.ipynb | joshsia/monkey-neural-decoder | 665e109a9d8b25c30fe75ce13aee3da9a6a85de6 | [
"MIT"
] | null | null | null | kaggle_nn_prototype.ipynb | joshsia/monkey-neural-decoder | 665e109a9d8b25c30fe75ce13aee3da9a6a85de6 | [
"MIT"
] | null | null | null | kaggle_nn_prototype.ipynb | joshsia/monkey-neural-decoder | 665e109a9d8b25c30fe75ce13aee3da9a6a85de6 | [
"MIT"
] | null | null | null | 67.646235 | 19,804 | 0.754777 | [
[
[
"import numpy as np\nimport pandas as pd\nimport pickle\nimport matplotlib.pyplot as plt\nimport torch\nfrom torch import nn, optim\nfrom torchvision import transforms, utils\nfrom torch.utils.data import TensorDataset, DataLoader\nimport time\n\nfrom sklearn.model_selection import train_test_split\n\n%matplotlib inline",
"_____no_output_____"
],
[
"with open(\"../input/monkeyspikes/training_data.pickle\", \"rb\") as f:\n training_data = pickle.load(f)\n\nwith open(\"../input/monkeyspikes/training_arm.pickle\", \"rb\") as f:\n training_arm = pickle.load(f)\n \nwith open(\"../input/monkeyspikes/mean_trajectory.pickle\", \"rb\") as f:\n mean_trajectory = pickle.load(f)",
"_____no_output_____"
],
[
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')",
"_____no_output_____"
],
[
"data = np.concatenate((training_data, training_arm), axis=1)\ndata.shape",
"_____no_output_____"
],
[
"BATCH_SIZE = 24\n\nX_train, X_test, y_train_arm, y_test_arm = train_test_split(\n data[:, :297], data[:, 297:],\n test_size=0.3, random_state=2022\n)\n\ny_train = y_train_arm[:, 0]\ny_test = y_test_arm[:, 0]\n\narm_train = y_train_arm[:, 1:]\narm_test = y_test_arm[:, 1:]\n\ntrain_dataset = TensorDataset(torch.Tensor(X_train), torch.Tensor(arm_train))\nvalid_dataset = TensorDataset(torch.Tensor(X_test), torch.Tensor(arm_test))\n\ntrain_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)\nvalid_dataloader = DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=True)",
"_____no_output_____"
],
[
"X, y = next(iter(train_dataloader))\nprint(X.shape)\nprint(y.shape)",
"torch.Size([24, 297])\ntorch.Size([24, 3000])\n"
],
[
"class NeuralDecoder(nn.Module):\n def __init__(self, input_size, output_size):\n super().__init__()\n self.main = nn.Sequential(\n nn.Linear(input_size, 500),\n nn.ReLU(),\n nn.Linear(500, 1_000),\n nn.ReLU(),\n nn.Linear(1_000, 5_000),\n nn.ReLU(),\n nn.Linear(5_000, 10_000),\n nn.ReLU(),\n nn.Linear(10_000, 15_000),\n nn.ReLU(),\n nn.Linear(15_000, 10_000),\n nn.ReLU(),\n nn.Linear(10_000, 5_000),\n nn.ReLU(),\n nn.Linear(5_000, output_size)\n )\n\n def forward(self, x):\n out = self.main(x)\n return out\n\n\ndef trainer(model, criterion, optimizer, trainloader, validloader, epochs=50, verbose=True):\n \"\"\"Simple training wrapper for PyTorch network.\"\"\"\n \n train_loss = []\n valid_loss = []\n for epoch in range(epochs):\n losses = 0\n for X, y in trainloader:\n X, y = X.to(device), y.to(device)\n optimizer.zero_grad() # Clear gradients w.r.t. parameters\n y_hat = model(X.reshape(X.shape[0], -1))\n loss = criterion(y_hat, y) # Calculate loss\n loss.backward() # Getting gradients w.r.t. parameters\n optimizer.step() # Update parameters\n losses += loss.item() # Add loss for this batch to running total\n train_loss.append(losses / len(trainloader))\n \n # Validation\n model.eval()\n valid_losses = 0\n with torch.no_grad():\n for X, y in validloader:\n X, y = X.to(device), y.to(device)\n y_hat = model(X)\n loss = criterion(y_hat, y)\n valid_losses += loss.item()\n valid_loss.append(valid_losses / len(validloader))\n \n model.train()\n \n if verbose:\n print(f\"Epoch: {epoch + 1}, \"\n f\"Train loss: {losses / len(trainloader):.2f}, \"\n f\"Valid loss: {valid_losses / len(validloader):.2f}\")\n \n results = {\"train_loss\": train_loss,\n \"valid_loss\": valid_loss}\n return results",
"_____no_output_____"
],
[
"torch.manual_seed(2022)\n\nmodel = NeuralDecoder(input_size=297, output_size=3_000)\nmodel.to(device);\n\ncriterion = nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=1e-4)\n\nprint(time.strftime(\"%H:%M:%S\", time.localtime()))\ntrainer(model, criterion, optimizer, train_dataloader, valid_dataloader, verbose=True)\nprint(time.strftime(\"%H:%M:%S\", time.localtime()))",
"07:11:13\nEpoch: 1, Train loss: 1659.24, Valid loss: 1176.63\nEpoch: 2, Train loss: 1067.38, Valid loss: 964.14\nEpoch: 3, Train loss: 951.51, Valid loss: 931.33\nEpoch: 4, Train loss: 902.84, Valid loss: 1014.76\nEpoch: 5, Train loss: 864.50, Valid loss: 838.92\nEpoch: 6, Train loss: 844.14, Valid loss: 887.05\nEpoch: 7, Train loss: 822.53, Valid loss: 931.35\nEpoch: 8, Train loss: 806.81, Valid loss: 823.22\nEpoch: 9, Train loss: 793.24, Valid loss: 804.43\nEpoch: 10, Train loss: 780.31, Valid loss: 944.74\nEpoch: 11, Train loss: 770.70, Valid loss: 815.95\nEpoch: 12, Train loss: 758.65, Valid loss: 798.31\nEpoch: 13, Train loss: 754.04, Valid loss: 790.12\nEpoch: 14, Train loss: 746.75, Valid loss: 779.58\nEpoch: 15, Train loss: 739.91, Valid loss: 808.60\nEpoch: 16, Train loss: 736.20, Valid loss: 784.34\nEpoch: 17, Train loss: 730.93, Valid loss: 778.47\nEpoch: 18, Train loss: 721.84, Valid loss: 769.82\nEpoch: 19, Train loss: 718.69, Valid loss: 794.38\nEpoch: 20, Train loss: 714.98, Valid loss: 809.27\nEpoch: 21, Train loss: 712.80, Valid loss: 773.44\nEpoch: 22, Train loss: 703.65, Valid loss: 768.64\nEpoch: 23, Train loss: 703.44, Valid loss: 780.94\nEpoch: 24, Train loss: 698.13, Valid loss: 779.53\nEpoch: 25, Train loss: 693.15, Valid loss: 764.92\nEpoch: 26, Train loss: 696.89, Valid loss: 774.46\nEpoch: 27, Train loss: 693.65, Valid loss: 767.72\nEpoch: 28, Train loss: 691.06, Valid loss: 780.17\nEpoch: 29, Train loss: 687.20, Valid loss: 803.56\nEpoch: 30, Train loss: 688.62, Valid loss: 776.69\nEpoch: 31, Train loss: 683.05, Valid loss: 782.98\nEpoch: 32, Train loss: 678.53, Valid loss: 768.61\nEpoch: 33, Train loss: 679.25, Valid loss: 760.32\nEpoch: 34, Train loss: 675.38, Valid loss: 770.24\nEpoch: 35, Train loss: 673.95, Valid loss: 773.98\nEpoch: 36, Train loss: 671.97, Valid loss: 780.08\nEpoch: 37, Train loss: 670.71, Valid loss: 767.79\nEpoch: 38, Train loss: 669.59, Valid loss: 769.78\nEpoch: 39, Train loss: 670.08, Valid loss: 779.06\nEpoch: 40, Train loss: 667.82, Valid loss: 760.34\nEpoch: 41, Train loss: 666.67, Valid loss: 780.88\nEpoch: 42, Train loss: 664.74, Valid loss: 770.73\nEpoch: 43, Train loss: 665.22, Valid loss: 759.18\nEpoch: 44, Train loss: 662.95, Valid loss: 762.54\nEpoch: 45, Train loss: 662.12, Valid loss: 763.77\nEpoch: 46, Train loss: 662.11, Valid loss: 771.67\nEpoch: 47, Train loss: 662.68, Valid loss: 767.63\nEpoch: 48, Train loss: 660.94, Valid loss: 768.76\nEpoch: 49, Train loss: 660.09, Valid loss: 762.42\nEpoch: 50, Train loss: 658.25, Valid loss: 759.71\n07:47:14\n"
],
[
"torch.save(model.state_dict(), \"trained_nn.pt\")",
"_____no_output_____"
],
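[
"# Minimal reload sketch: restore the checkpoint saved in the previous cell.\n# Assumes the NeuralDecoder class and `device` defined earlier; the input and\n# output sizes must match the ones used at training time.\nreloaded = NeuralDecoder(input_size=297, output_size=3_000)\nreloaded.load_state_dict(torch.load(\"trained_nn.pt\", map_location=device))\nreloaded.to(device)\nreloaded.eval()  # switch off training-time behaviour before inference",
"_____no_output_____"
],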
[
"y_hat = model(torch.Tensor(X_test).to(device))\ny_hat = y_hat.cpu().detach().numpy()\n\nrmse = np.sqrt(np.mean((arm_test - y_hat)**2))\nprint(rmse)",
"27.562825749712523\n"
],
[
"good_examples = 0\nbad_examples = 0\n\nax_good = plt.subplot(121)\nax_bad = plt.subplot(122)\n\nfor X, y in valid_dataloader:\n X, y = X.to(device), y.to(device)\n prediction = model(X)\n\n y = y.cpu().detach().numpy()\n prediction = prediction.cpu().detach().numpy()\n \n while good_examples < 30 and bad_examples < 30:\n for i in range(X.shape[0]):\n rmse = np.sqrt(np.mean((prediction[i, :] - y[i, :])**2))\n if rmse < 5:\n good_examples += 1\n ax_good.plot(y[i, :1000], y[i, 1000:2000], color=\"r\")\n ax_good.plot(prediction[i, :1000], prediction[i, 1000:2000], color=\"b\")\n if rmse > 30:\n bad_examples += 1\n ax_bad.plot(y[i, :1000], y[i, 1000:2000], color=\"r\")\n ax_bad.plot(prediction[i, :1000], prediction[i, 1000:2000], color=\"b\")\n\nax_good.title.set_text(\"Good predictions\")\nax_bad.title.set_text(\"Bad predictions\")\nax_good.set_xlim([-150, 150])\nax_good.set_ylim([-100, 100])\nax_bad.set_xlim([-150, 150])\nax_bad.set_ylim([-100, 100])\nplt.show()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0dbf329df18dfed6a897e43478a2f28f13f71fc | 256,601 | ipynb | Jupyter Notebook | Auto_ViML_Demo.ipynb | manugarri/Auto_ViML | e1d79e021b112fdf652b3c8830a536ec5bcd61b3 | [
"Apache-2.0"
] | null | null | null | Auto_ViML_Demo.ipynb | manugarri/Auto_ViML | e1d79e021b112fdf652b3c8830a536ec5bcd61b3 | [
"Apache-2.0"
] | null | null | null | Auto_ViML_Demo.ipynb | manugarri/Auto_ViML | e1d79e021b112fdf652b3c8830a536ec5bcd61b3 | [
"Apache-2.0"
] | null | null | null | 462.344144 | 91,020 | 0.923325 | [
[
[
"<a href=\"https://colab.research.google.com/github/AutoViML/Auto_ViML/blob/master/Auto_ViML_Demo.ipynb\" target=\"_parent\">\n <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n</a>",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndatapath = 'https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/stuff/'",
"_____no_output_____"
],
[
"#### THIS SHOULD print Version Number. If it doesn't, it means you don't have latest version ## \n### If you want to see the sitepackages version use this\nfrom autoviml.Auto_ViML import Auto_ViML",
"[nltk_data] Downloading collection 'popular'\n[nltk_data] | \n[nltk_data] | Downloading package cmudict to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package cmudict is already up-to-date!\n[nltk_data] | Downloading package gazetteers to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package gazetteers is already up-to-date!\n[nltk_data] | Downloading package genesis to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package genesis is already up-to-date!\n[nltk_data] | Downloading package gutenberg to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package gutenberg is already up-to-date!\n[nltk_data] | Downloading package inaugural to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package inaugural is already up-to-date!\n[nltk_data] | Downloading package movie_reviews to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package movie_reviews is already up-to-date!\n[nltk_data] | Downloading package names to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package names is already up-to-date!\n[nltk_data] | Downloading package shakespeare to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package shakespeare is already up-to-date!\n[nltk_data] | Downloading package stopwords to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package stopwords is already up-to-date!\n[nltk_data] | Downloading package treebank to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package treebank is already up-to-date!\n[nltk_data] | Downloading package twitter_samples to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package twitter_samples is already up-to-date!\n[nltk_data] | Downloading package omw to /home/jupyter/nltk_data...\n[nltk_data] | Package omw is already up-to-date!\n[nltk_data] | Downloading package wordnet to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package wordnet is already up-to-date!\n[nltk_data] | Downloading package wordnet_ic to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package wordnet_ic is already up-to-date!\n[nltk_data] | Downloading package words to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package words is already up-to-date!\n[nltk_data] | Downloading package maxent_ne_chunker to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package maxent_ne_chunker is already up-to-date!\n[nltk_data] | Downloading package punkt to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package punkt is already up-to-date!\n[nltk_data] | Downloading package snowball_data to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package snowball_data is already up-to-date!\n[nltk_data] | Downloading package averaged_perceptron_tagger to\n[nltk_data] | /home/jupyter/nltk_data...\n[nltk_data] | Package averaged_perceptron_tagger is already up-\n[nltk_data] | to-date!\n[nltk_data] | \n[nltk_data] Done downloading collection popular\n"
],
[
"df = pd.read_csv(datapath+'titanic.csv')\n#test = train[-15:]\n#test = pd.read_csv(datapath+'test.csv')\nprint(train.shape)\n#print(test.shape)\nprint(train.head())\ntarget = 'Survived'",
"(887, 8)\n Survived Pclass Name \\\n0 0 3 Mr. Owen Harris Braund \n1 1 1 Mrs. John Bradley (Florence Briggs Thayer) Cum... \n2 1 3 Miss. Laina Heikkinen \n3 1 1 Mrs. Jacques Heath (Lily May Peel) Futrelle \n4 0 3 Mr. William Henry Allen \n\n Sex Age Siblings/Spouses Aboard Parents/Children Aboard Fare \n0 male 22.00 1 0 7.25 \n1 female 38.00 1 0 71.28 \n2 female 26.00 0 0 7.92 \n3 female 35.00 1 0 53.10 \n4 male 35.00 0 0 8.05 \n"
],
[
"num = int(0.9*df.shape[0])\ntrain = df[:num]\ntest = df[num:]\nsample_submission=''\nscoring_parameter = 'balanced-accuracy'",
"_____no_output_____"
],
[
"#### If Boosting_Flag = True => XGBoost, Fase=>ExtraTrees, None=>Linear Model\nm, feats, trainm, testm = Auto_ViML(train, target, test, sample_submission,\n scoring_parameter=scoring_parameter,\n hyper_param='GS',feature_reduction=True,\n Boosting_Flag=True,Binning_Flag=False,\n Add_Poly=0, Stacking_Flag=False, \n Imbalanced_Flag=False, \n verbose=1) ",
"############## D A T A S E T A N A L Y S I S #######################\nTraining Set Shape = (798, 8)\n Training Set Memory Usage = 0.05 MB\nTest Set Shape = (89, 8)\n Test Set Memory Usage = 0.01 MB\nSingle_Label Target: ['Survived']\nShuffling the data set before training\n Class -> Counts -> Percent\n 0: 489 -> 61.3%\n 1: 309 -> 38.7%\nUsing GridSearchCV for Hyper Parameter Tuning. This is slow. Switch to RS for faster tuning...\n Target Survived is already numeric. No transformation done.\n############## C L A S S I F Y I N G V A R I A B L E S ####################\nClassifying variables in data set...\n Number of Numeric Columns = 2\n Number of Integer-Categorical Columns = 3\n Number of String-Categorical Columns = 0\n Number of Factor-Categorical Columns = 0\n Number of String-Boolean Columns = 1\n Number of Numeric-Boolean Columns = 0\n Number of Discrete String Columns = 1\n Number of NLP String Columns = 0\n Number of Date Time Columns = 0\n Number of ID Columns = 0\n Number of Columns to Delete = 0\n 7 Predictors classified...\n This does not include the Target column(s)\n No variables removed since no ID or low-information variables found in data set\nNumber of GPUs = 2\nNo GPU available on this device\n############# D A T A P R E P A R A T I O N #############\nNo Missing Values in train data set\nTest data has no missing values. Continuing...\nCompleted Scaling of Train and Test Data using MinMaxScaler(copy=True, feature_range=(0, 1)) ...\nBinary_Classification problem: hyperparameters are being optimized for balanced_accuracy\n############## F E A T U R E S E L E C T I O N ####################\nRemoving highly correlated features among 5 variables using pearson correlation...\n No variables were removed since no highly correlated variables found in data\n\n############# PROCESSING T A R G E T = Survived ##########################\nNo categorical feature reduction done. All 5 Categorical vars selected \n############## F E A T U R E S E L E C T I O N ####################\nRemoving highly correlated features among 2 variables using pearson correlation...\n No variables were removed since no highly correlated variables found in data\n Adding 5 categorical variables to reduced numeric variables of 2\n############## F E A T U R E S E L E C T I O N ####################\nCurrent number of predictors = 7 \n Finding Important Features using Boosted Trees algorithm...\n using 7 variables...\n using 5 variables...\n using 3 variables...\n using 1 variables...\nFound 7 important features\nStarting Feature Engineering now...\n No Entropy Binning specified or there are no numeric vars in data set to Bin\n############### M O D E L B U I L D I N G ####################\nRows in Train data set = 718\n Features in Train data set = 7\n Rows in held-out data set = 80\nFinding Best Model and Hyper Parameters for Target: Survived...\n Baseline Accuracy Needed for Model = 61.28%\nCPU Count = 62 in this device\nUsing XGBoost Model, Estimated Training time = 0.003 mins\n Actual training time (in seconds): 6\n########### S I N G L E M O D E L R E S U L T S #################\n5-fold Cross Validation balanced-accuracy = 80.5%\n Best Parameters for Model = {'max_depth': 10, 'learning_rate': 0.1, 'gamma': 0}\nFinding Best Threshold for Highest F1 Score...\n"
],
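[
"# Sketch of how a probability cutoff such as the m_thresh used below can be\n# chosen: sweep candidate thresholds and keep the one with the best F1 score.\n# Assumes `testm` (Auto_ViML's scored test frame from the cell above) and that\n# the target column is present in `test`.\nimport numpy as np\nfrom sklearn.metrics import f1_score\n\ny_true = test[target].values\ny_proba = testm[target+'_proba_'+'1'].values\nthresholds = np.arange(0.05, 0.95, 0.01)\nscores = [f1_score(y_true, (y_proba > t).astype(int)) for t in thresholds]\nprint('best threshold: %0.2f (F1 = %0.3f)' % (thresholds[int(np.argmax(scores))], max(scores)))",
"_____no_output_____"
],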
[
"def reverse_dict(map_dict):\n return dict([(v,k) for (k,v) in map_dict.items()])\n# Use this to Test Classification Problems Only ####\nret_dict = {0: 0, 1: 1}\nmap_dict = reverse_dict(ret_dict)\nm_thresh = 0.21\nmodelname='XGBoost'\n#####################################################################\nfrom sklearn.metrics import classification_report, confusion_matrix\nfrom sklearn.metrics import balanced_accuracy_score\ntry:\n print('Normal Balanced Accuracy = %0.2f%%' %(\n 100*balanced_accuracy_score(test[target].map(map_dict).values, (\n testm[target+'_proba_'+'1']>0.5).astype(int).values)))\n print('Test results since target variable is present in test data:')\n print(confusion_matrix(test[target].map(map_dict).values, (\n testm[target+'_proba_'+'1']>0.5).astype(int).values))\n print(classification_report(test[target].map(map_dict).values, (\n testm[target+'_proba_'+'1']>0.5).astype(int).values))\n print('Modified Threshold Balanced Accuracy = %0.2f%%' %(\n 100*balanced_accuracy_score(test[target].map(map_dict).values, (\n testm[target+'_proba_'+'1']>m_thresh).astype(int).values)))\n print(confusion_matrix(test[target].map(map_dict).values, (\n testm[target+'_proba_'+'1']>m_thresh).astype(int).values))\n print(classification_report(test[target].map(map_dict).values, (\n testm[target+'_proba_'+'1']>m_thresh).astype(int).values))\nexcept:\n print('No target variable present in test data. No results')\n",
"Normal Balanced Accuracy = 84.04%\nTest results since target variable is present in test data:\n[[50 6]\n [ 7 26]]\n precision recall f1-score support\n\n 0 0.88 0.89 0.88 56\n 1 0.81 0.79 0.80 33\n\n accuracy 0.85 89\n macro avg 0.84 0.84 0.84 89\nweighted avg 0.85 0.85 0.85 89\n\nModified Threshold Balanced Accuracy = 79.03%\n[[41 15]\n [ 5 28]]\n precision recall f1-score support\n\n 0 0.89 0.73 0.80 56\n 1 0.65 0.85 0.74 33\n\n accuracy 0.78 89\n macro avg 0.77 0.79 0.77 89\nweighted avg 0.80 0.78 0.78 89\n\n"
]
],
[
[
"##### REGRESSION ##############\nfrom autoviml.Auto_ViML import print_regression_model_stats\n#### Use this to Test Regression Problems Only #####\nimport numpy as np\ndef rmse(results, y_cv):\n return np.sqrt(np.mean((results - y_cv)**2, axis=0))\n##############################################################\nmodelname='CatBoost'\nprint(rmse(test[target].values,testm_home[target+'_'+modelname+'_predictions'].values))\nprint_regression_model_stats(test[target].values,testm_home[target+'_'+modelname+'_predictions'].values)\n###############################################################",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"raw"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
]
] |
d0dc064bc3add94f55292f5d10615826f33b6879 | 5,319 | ipynb | Jupyter Notebook | machine-learning-intro/exercise/random-forests.ipynb | hmdprs/data-scientist | 0eccf20a809cd5239843ccfbf75a34d03a95a2f9 | [
"MIT"
] | 5 | 2020-08-08T11:41:04.000Z | 2021-05-29T19:41:05.000Z | machine-learning-intro/exercise/random-forests.ipynb | hmdprs/data-scientist | 0eccf20a809cd5239843ccfbf75a34d03a95a2f9 | [
"MIT"
] | null | null | null | machine-learning-intro/exercise/random-forests.ipynb | hmdprs/data-scientist | 0eccf20a809cd5239843ccfbf75a34d03a95a2f9 | [
"MIT"
] | null | null | null | 5,319 | 5,319 | 0.744877 | [
[
[
"**[Introduction to Machine Learning Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**\n\n---\n",
"_____no_output_____"
],
[
"## Recap\nHere's the code you've written so far.",
"_____no_output_____"
]
],
[
[
"# code you have previously used\n\n# load data\nimport pandas as pd\niowa_file_path = '../input/home-data-for-ml-course/train.csv'\nhome_data = pd.read_csv(iowa_file_path)\n\n# create target object and call it y\ny = home_data['SalePrice']\n\n# create X\nfeatures = ['LotArea', 'YearBuilt', '1stFlrSF', '2ndFlrSF', 'FullBath', 'BedroomAbvGr', 'TotRmsAbvGrd']\nX = home_data[features]\n\n# split into validation and training data\nfrom sklearn.model_selection import train_test_split\ntrain_X, val_X, train_y, val_y = train_test_split(X, y, random_state=1)\n\n# specify Model\nfrom sklearn.tree import DecisionTreeRegressor\niowa_model = DecisionTreeRegressor(random_state=1)\n\n# fit Model\niowa_model.fit(train_X, train_y)\n\n# make validation predictions\nval_predictions = iowa_model.predict(val_X)\n\n# calculate mean absolute error\nfrom sklearn.metrics import mean_absolute_error\nval_mae = mean_absolute_error(val_y, val_predictions)\nprint(f\"Validation MAE when not specifying max_leaf_nodes: {val_mae:,.0f}\")\n# print(\"Validation MAE when not specifying max_leaf_nodes: {:,.0f}\".format(val_mae))\n\n# using best value for max_leaf_nodes\niowa_model = DecisionTreeRegressor(max_leaf_nodes=100, random_state=1)\niowa_model.fit(train_X, train_y)\nval_predictions = iowa_model.predict(val_X)\nval_mae = mean_absolute_error(val_y, val_predictions)\nprint(f\"Validation MAE for best value of max_leaf_nodes: {val_mae:,.0f}\")\n\n# set up code checking\nfrom learntools.core import binder\nbinder.bind(globals())\nfrom learntools.machine_learning.ex6 import *\nprint(\"\\nSetup complete\")",
"Validation MAE when not specifying max_leaf_nodes: 29,653\nValidation MAE for best value of max_leaf_nodes: 27,283\n\nSetup complete\n"
]
],
[
[
"# Exercises\nData science isn't always this easy. But replacing the decision tree with a Random Forest is going to be an easy win.",
"_____no_output_____"
],
[
"## Step 1: Use a Random Forest",
"_____no_output_____"
]
],
[
[
"from sklearn.ensemble import RandomForestRegressor\n\n# specify model. set random_state to 1\nrf_model = RandomForestRegressor(random_state=1)\n\n# fit model\nrf_model.fit(train_X, train_y)\n\n# calculate the mean absolute error of your Random Forest model on the validation data\nval_ft_predictions = rf_model.predict(val_X)\nrf_val_mae = mean_absolute_error(val_y, val_ft_predictions)\n\nprint(f\"Validation MAE for Random Forest Model: {rf_val_mae}\")\n\n# Check your answer\nstep_1.check()",
"Validation MAE for Random Forest Model: 22762.42931506849\n"
],
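[
"# Optional sketch: tune n_estimators the same way max_leaf_nodes was tuned\n# for the decision tree. Assumes train_X, val_X, train_y, val_y from the\n# recap cell; the candidate values are illustrative, not prescribed.\nfor n_estimators in [50, 100, 250, 500]:\n    candidate = RandomForestRegressor(n_estimators=n_estimators, random_state=1)\n    candidate.fit(train_X, train_y)\n    candidate_mae = mean_absolute_error(val_y, candidate.predict(val_X))\n    print(\"n_estimators={}: Validation MAE = {:,.0f}\".format(n_estimators, candidate_mae))",
"_____no_output_____"
],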
[
"# The lines below will show you a hint or the solution.\n# step_1.hint() \n# step_1.solution()",
"_____no_output_____"
]
],
[
[
"So far, you have followed specific instructions at each step of your project. This helped learn key ideas and build your first model, but now you know enough to try things on your own. \n\nMachine Learning competitions are a great way to try your own ideas and learn more as you independently navigate a machine learning project. \n\n# Keep Going\n\nYou are ready for **[Machine Learning Competitions](https://www.kaggle.com/kernels/fork/1259198).**\n",
"_____no_output_____"
],
[
"---\n**[Introduction to Machine Learning Home Page](https://www.kaggle.com/learn/intro-to-machine-learning)**\n\n\n\n\n\n*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum) to chat with other Learners.*",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0dc076e9d2bb990ab13b23f0e065048f44081e9 | 47,173 | ipynb | Jupyter Notebook | notebooks/plot_move_frequency.ipynb | brannondorsey/ChessEmbeddings | e521638b39ea4af1efa4c62ab519406324fea385 | [
"MIT"
] | 1 | 2019-08-18T12:47:01.000Z | 2019-08-18T12:47:01.000Z | notebooks/plot_move_frequency.ipynb | brannondorsey/ChessEmbeddings | e521638b39ea4af1efa4c62ab519406324fea385 | [
"MIT"
] | null | null | null | notebooks/plot_move_frequency.ipynb | brannondorsey/ChessEmbeddings | e521638b39ea4af1efa4c62ab519406324fea385 | [
"MIT"
] | 1 | 2019-11-10T20:37:23.000Z | 2019-11-10T20:37:23.000Z | 122.846354 | 14,026 | 0.859877 | [
[
[
"We will use this notebook to calculate and visualize statistics of our chess move dataset. This will allow us to better understand our limitations and help diagnose problems we may encounter down the road when training/defining our model.",
"_____no_output_____"
]
],
[
[
"import pdb\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"def get_move_freqs(moves, sort=True):\n freq_dict = {}\n for move in moves:\n if move not in freq_dict:\n freq_dict[move] = 0\n freq_dict[move] = freq_dict[move] + 1\n tuples = [(w, c) for w, c in freq_dict.items()]\n if sort:\n tuples = sorted(tuples, key=lambda x: -x[1])\n return (tuples, moves)\n\ndef plot_frequency(counts, move_limit=1000):\n # limit to the n most frequent moves\n n = 1000\n counts = counts[0:n]\n # from: http://stackoverflow.com/questions/30690619/python-histogram-using-matplotlib-on-top-words\n moves = [x[0] for x in counts]\n values = [int(x[1]) for x in counts]\n bar = plt.bar(range(len(moves)), values, color='green', alpha=0.4)\n\n plt.xlabel('Move Index')\n plt.ylabel('Frequency')\n plt.title('Move Frequency Chart')\n\n plt.show()\n \ndef plot_uniq_over_count(moves, interval=0.01):\n \n xs, ys = [], []\n for i in range(0, len(moves), int(len(moves) * interval)):\n chunk = moves[0:i]\n uniq = list(set(chunk))\n xs.append(len(chunk))\n ys.append(len(uniq))\n \n plt.plot(xs, ys)\n plt.ticklabel_format(style='sci', axis='x', scilimits=(0, 0))\n plt.xlabel('Moves')\n plt.ylabel('Unique Moves')\n plt.show()\n\ndef plot_game_lengths(game_lengths):\n \n xs = [g[0] for g in game_lengths]\n ys = [g[1] for g in game_lengths]\n \n bar = plt.bar(xs, ys, color='blue', alpha=0.4)\n\n plt.xlabel('Half-moves per game')\n plt.ylabel('Frequency')\n plt.title('Game Length')\n\n plt.show()\n \ndef plot_repeat_states(moves):\n \n uniq_states = {}\n moves_in_game = ''\n \n for move in moves:\n \n moves_in_game = moves_in_game + ' ' + move\n \n if moves_in_game not in uniq_states:\n uniq_states[moves_in_game] = 0\n \n uniq_states[moves_in_game] = uniq_states[moves_in_game] + 1\n \n if is_game_over_move(move):\n moves_in_game = ''\n \n vals = []\n d = {}\n \n for state, count in sorted(uniq_states.items(), key=lambda x: (-x[1], x[0])):\n vals.append((count, state))\n# move_count = len(state.split())\n# if move_count not in d:\n# d[move_count] = 0\n# d[move_count] = d[move_count] + 1\n \n vals.append([c for c, s in vals])\n \n plt.plot(vals)\n plt.xlim([0, 100])\n plt.xlabel('Board State')\n plt.ylabel('Frequency')\n plt.title('Frequency of Board State')\n plt.show()\n \n# vals = [(length, count) for length, count in sorted(d.items(), key=lambda x: -x[0])]\n# pdb.set_trace()\n# plt.bar(vals)\n# plt.xlim([0, 1000])\n# plt.xlabel('Moves in State')\n# plt.ylabel('Frequency')\n# print('{} uniq board states'.format(len(list(uniq_states.keys()))))\n \ndef get_game_lengths(moves):\n \n game_lengths = {}\n total_games = 0\n current_move = 1\n for move in moves:\n if is_game_over_move(move):\n if current_move not in game_lengths:\n game_lengths[current_move] = 0\n game_lengths[current_move] = game_lengths[current_move] + 1\n current_move = 1\n total_games = total_games + 1\n else:\n current_move = current_move + 1\n print(total_games)\n return [(k, v) for k, v in game_lengths.items()], total_games\n \ndef is_game_over_move(move):\n return move in ('0-1', '1-0', '1/2-1/2')",
"_____no_output_____"
]
],
[
[
"Load our concatonated moves data.",
"_____no_output_____"
]
],
[
[
"with open('../data/train_moves.txt', 'r') as f:\n moves = f.read().split(' ')\n print('{} moves loaded'.format(len(moves)))\n counts, moves = get_move_freqs(moves)\n game_lengths, total_games = get_game_lengths(moves)",
"10108811 moves loaded\n126872\n"
],
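[
"# Sketch: cumulative coverage of the most frequent moves. Assumes `counts`\n# (move, frequency) pairs sorted by descending frequency and `moves` (the\n# full move list) from the cell above; the top-N values are illustrative.\ntotal = len(moves)\nfor top_n in (100, 500, 1000):\n    covered = sum(c for m, c in counts[:top_n])\n    print('top {} moves cover {:.1f}% of the corpus'.format(top_n, 100.0 * covered / total))",
"_____no_output_____"
],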
[
"# plot_repeat_states(moves)",
"_____no_output_____"
]
],
[
[
"## Plot Move Frequency\nHere we can see which moves appear most frequently in the dataset. These moves are the most popular moves played by chess champions.",
"_____no_output_____"
]
],
[
[
"plot_frequency(counts)",
"_____no_output_____"
]
],
[
[
"We will list the most common few moves along with what percentage of the entire moves dataset this move represents.",
"_____no_output_____"
]
],
[
[
"top_n = 10\nfor w in counts[0:top_n]:\n print((w[0]).ljust(8), '{:.2f}%'.format((w[1]/len(moves)) * 100.00))",
"O-O 2.00%\nNf6 1.40%\nNf3 1.31%\nd4 1.26%\nNc3 1.15%\nd5 1.09%\ne4 1.09%\nc4 1.07%\nc5 1.01%\ne5 0.94%\n"
]
],
[
[
"## Plot Unique Moves\nHere we compare the number of unique moves over the total move count. Take notice that the number of unique moves converges towards a constant as the number of total moves increase. This would suggest that there is a subset of all possible moves that actually make sense for a chess champion to play.",
"_____no_output_____"
]
],
[
[
"plot_uniq_over_count(moves)",
"_____no_output_____"
]
],
[
[
"## Plot Game Lengths",
"_____no_output_____"
]
],
[
[
"plot_game_lengths(game_lengths)",
"_____no_output_____"
],
[
"top_n = 10\nsorted_lengths = sorted(game_lengths, key=lambda x: -x[1])\nfor l in sorted_lengths[0:top_n]:\n print((str(l[0])).ljust(8), '{:.3f}%'.format((l[1]/total_games) * 100.00))",
"82 2.684%\n80 1.926%\n84 1.843%\n70 1.790%\n78 1.784%\n66 1.760%\n68 1.747%\n74 1.736%\n76 1.725%\n72 1.718%\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |