hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (sequence) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (sequence) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (sequence) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (sequence) | cell_types (sequence) | cell_type_groups (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
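Each row below stores one notebook: repository metadata in the scalar columns and the notebook body in the `cells`, `cell_types`, and `cell_type_groups` sequence columns. A minimal sketch of how rows with this schema could be inspected is given here, assuming the split has been exported locally; the file name `notebooks.parquet` is a hypothetical placeholder, not part of the dataset.

```python
import pandas as pd

# Assumption: the rows shown below were exported to a local Parquet file.
df = pd.read_parquet("notebooks.parquet")  # hypothetical path

# Scalar columns from the schema above
print(df[["hexsha", "size", "avg_line_length", "max_line_length", "alphanum_fraction"]].head())

# `cell_types` is a sequence column: count markdown vs. code cells across the sample
print(df["cell_types"].explode().value_counts())
```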
d0c0fac1f859e98afc32e98afaf437fabc82de50 | 120,402 | ipynb | Jupyter Notebook | m02/m02_c06_lab/m02_c06_lab.ipynb | cristobal-montecino/mat281_portfolio | 3aa05ef63e07ab2db6956f9586c5139697af917b | [
"MIT"
] | null | null | null | m02/m02_c06_lab/m02_c06_lab.ipynb | cristobal-montecino/mat281_portfolio | 3aa05ef63e07ab2db6956f9586c5139697af917b | [
"MIT"
] | null | null | null | m02/m02_c06_lab/m02_c06_lab.ipynb | cristobal-montecino/mat281_portfolio | 3aa05ef63e07ab2db6956f9586c5139697af917b | [
"MIT"
] | null | null | null | 106.456233 | 73,948 | 0.760967 | [
[
[
"<img src=\"https://upload.wikimedia.org/wikipedia/commons/4/47/Logo_UTFSM.png\" width=\"200\" alt=\"utfsm-logo\" align=\"left\"/>\n\n# MAT281\n### Aplicaciones de la Matemática en la Ingeniería",
"_____no_output_____"
],
[
"## Módulo 02\n## Laboratorio Clase 06: Desarrollo de Algoritmos",
"_____no_output_____"
],
[
"### Instrucciones\n\n\n* Completa tus datos personales (nombre y rol USM) en siguiente celda.\n* La escala es de 0 a 4 considerando solo valores enteros.\n* Debes _pushear_ tus cambios a tu repositorio personal del curso.\n* Como respaldo, debes enviar un archivo .zip con el siguiente formato `mXX_cYY_lab_apellido_nombre.zip` a [email protected]. \n* Se evaluará:\n - Soluciones\n - Código\n - Que Binder esté bien configurado.\n - Al presionar `Kernel -> Restart Kernel and Run All Cells` deben ejecutarse todas las celdas sin error.\n* __La entrega es al final de esta clase.__",
"_____no_output_____"
],
[
"__Nombre__: Cristóbal Montecino\n\n__Rol__: 201710019-2",
"_____no_output_____"
],
[
"## Ejercicio 1 (2 ptos.): \nUtilizando los datos del Gasto Fiscal Neto de Chile, crea una nueva columna del tipo `datetime` llamada `dt_date` utilizando `anio`, `mes` y el día primero de cada mes.",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
]
],
[
[
"Utilizaremos como ejemplo un dataset de gasto fiscal neto en Chile, obtenidos de una [datathon de DataCampfire](https://datacampfire.com/datathon/).",
"_____no_output_____"
]
],
[
[
"gasto_raw = pd.read_csv(os.path.join(\"data\", \"gasto_fiscal.csv\"), sep=\";\")\ngasto_raw.head()",
"_____no_output_____"
]
],
[
[
"Pasos a seguir:\n\n1. Renombra la columna `anio` por `year`.\n2. Crea la columna `month` utilizando el diccionario `es_month_dict` definido abajo. Hint: Usar un mapeo.\n3. Crea la columna `day` en que todos los registros sean igual a `1`.\n4. Crea la columna `dt_date` con la función `pd.to_datetime`. Lee la documentación!\n5. Finalmente, elimina las columnas `year`, `mes`, `month`, `day`.",
"_____no_output_____"
]
],
[
[
"es_month_dict = {\n 'enero': 1,\n 'febrero': 2,\n 'marzo': 3, \n 'abril': 4,\n 'mayo': 5,\n 'junio': 6,\n 'julio': 7,\n 'agosto': 8,\n 'septiembre': 9,\n 'octubre': 10,\n 'noviembre': 11,\n 'diciembre': 12\n}",
"_____no_output_____"
],
[
"gasto = (\n gasto_raw.rename(columns={'anio': 'year'})\n .assign(\n month=lambda x: x[\"mes\"].str.lower().map(es_month_dict),\n day=1,\n dt_date=lambda x: pd.to_datetime(x.loc[:, ['year', 'month', 'day']]),\n ).drop(columns=['year', 'mes', 'month', 'day'])\n)\ngasto.head()",
"_____no_output_____"
]
],
[
[
"## Ejercicio 2 (1 pto.)\n\nPivotea el dataframe `gasto_raw` tal que:\n\n- Los índices sean los ministerios (partidas).\n- Las columnas sean los años.\n- Cada celda sea la suma de los montos.\n- Rellenar las celdas vacías con `\"\"`.\n\n¿Cuáles son las combinaciones de Año-Ministerio que no tienen gasto?",
"_____no_output_____"
]
],
[
[
"gasto_raw['anio'].sort_values().unique()",
"_____no_output_____"
],
[
"gasto_raw.pivot_table(\n index='partida',\n columns='anio',\n values='monto',\n aggfunc='sum',\n fill_value='',\n)",
"_____no_output_____"
]
],
[
[
"__Respuesta__:\n\n|Ministerio|Años|\n|-|-|\n|Ministerio De Energía|2009|\n|Ministerio De La Mujer Y La Equidad De Género|2009-2015|\n|Ministerio Del Deporte|2009-2013|\n|Ministerio Del Medio Ambiente|2009|\n|Servicio Electoral|2009 - 2016|",
"_____no_output_____"
],
[
"## Ejercicio 3 (1 pto.)\n\nRealiza los benchmarks del archivo `benchmark_loop.py` que se encuentra en el directorio `fast_pandas`.\n\n¿Cuál forma dirías que es la más eficiente?\n\nUtiliza el comando mágico `%load` y edita de tal manera que el módulo `Benchmarker` se importe correctamente.",
"_____no_output_____"
]
],
[
[
"# %load fast_pandas/benchmark_loop.py\nfrom fast_pandas.Benchmarker import Benchmarker\n\ndef iterrows_function(df):\n for index, row in df.iterrows():\n pass\n\n\ndef itertuples_function(df):\n for row in df.itertuples():\n pass\n\n\ndef df_values(df):\n for row in df.values:\n pass\n\n\n\nparams = {\n \"df_generator\": 'pd.DataFrame(np.random.randint(1, df_size, (df_size, 4)), columns=list(\"ABCD\"))',\n \"functions_to_evaluate\": [df_values, itertuples_function, iterrows_function],\n \"title\": \"Benchmark for iterating over all rows\",\n \"user_df_size_powers\": [2, 3, 4, 5, 6],\n \"user_loop_size_powers\": [2, 2, 1, 1, 1],\n}\n\nbenchmark = Benchmarker(**params)\nbenchmark.benchmark_all()\nbenchmark.print_results()\nbenchmark.plot_results()\n",
"Benchmarking function: df_values\n\tTesting with a dataframe of size: 100\n\tResult (seconds): 3.576299999622279e-05\n\tTesting with a dataframe of size: 1000\n\tResult (seconds): 0.0002777669999977661\n\tTesting with a dataframe of size: 10000\n\tResult (seconds): 0.002646879999974772\n\tTesting with a dataframe of size: 100000\n\tResult (seconds): 0.02882777000004353\n\tTesting with a dataframe of size: 1000000\n\tResult (seconds): 0.29783320000024105\nBenchmarking function: itertuples_function\n\tTesting with a dataframe of size: 100\n\tResult (seconds): 0.0013910940000005211\n\tTesting with a dataframe of size: 1000\n\tResult (seconds): 0.0026508190000004107\n\tTesting with a dataframe of size: 10000\n\tResult (seconds): 0.017680349999955068\n\tTesting with a dataframe of size: 100000\n\tResult (seconds): 0.17078534000002038\n\tTesting with a dataframe of size: 1000000\n\tResult (seconds): 1.6431943000006868\nBenchmarking function: iterrows_function\n\tTesting with a dataframe of size: 100\n\tResult (seconds): 0.01306574900000669\n\tTesting with a dataframe of size: 1000\n\tResult (seconds): 0.13522635399999672\n\tTesting with a dataframe of size: 10000\n\tResult (seconds): 1.425159259999964\n\tTesting with a dataframe of size: 100000\n\tResult (seconds): 13.341911239999991\n\tTesting with a dataframe of size: 1000000\n\tResult (seconds): 128.60411659999954\n[3.576299999622279e-05, 0.0002777669999977661, 0.002646879999974772, 0.02882777000004353, 0.29783320000024105]\n[0.0013910940000005211, 0.0026508190000004107, 0.017680349999955068, 0.17078534000002038, 1.6431943000006868]\n[0.01306574900000669, 0.13522635399999672, 1.425159259999964, 13.341911239999991, 128.60411659999954]\n"
]
],
[
[
"__Respuesta__: df_values",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c105ae3b0c8f9e89513c35e4fc1c89e093e290 | 15,064 | ipynb | Jupyter Notebook | examples/sdf_parser_visuals.ipynb | argenos/pcg_gazebo | 643a2669b60419ce34d700d5c75fcca7273fd137 | [
"ECL-2.0",
"Apache-2.0"
] | 40 | 2020-02-04T18:16:49.000Z | 2022-02-22T11:36:34.000Z | examples/sdf_parser_visuals.ipynb | awesomebytes/pcg_gazebo | 4f335dd460ef7c771f1df78b46a92fad4a62cedc | [
"ECL-2.0",
"Apache-2.0"
] | 75 | 2020-01-23T13:40:50.000Z | 2022-02-09T07:26:01.000Z | examples/sdf_parser_visuals.ipynb | awesomebytes/pcg_gazebo | 4f335dd460ef7c771f1df78b46a92fad4a62cedc | [
"ECL-2.0",
"Apache-2.0"
] | 18 | 2020-09-10T06:35:41.000Z | 2022-02-20T19:08:17.000Z | 29.421875 | 161 | 0.476766 | [
[
[
"# Visuals\n\nThe `<visual>` element specifies the shape of the geometry for rendering. It is a child element from `<link>` and a link can have multiple visual elements.",
"_____no_output_____"
]
],
[
[
"# Import the element creator\nfrom pcg_gazebo.parsers.sdf import create_sdf_element",
"_____no_output_____"
],
[
"# The visual element is created with an empty geometry by default\nvisual = create_sdf_element('visual')\nprint(visual)",
"<visual name=\"visual\">\n <geometry>\n <empty></empty>\n </geometry>\n</visual>\n\n"
],
[
"# To see the optional elements, use the method reset()\nvisual.reset(with_optional_elements=True)\nprint(visual)",
"<visual name=\"visual\">\n <geometry>\n <empty></empty>\n </geometry>\n <pose frame=\"\">0 0 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n"
],
[
"# Setting the parameters for the visual element\n\n# Each visual in a link should have an unique name\nvisual.name = 'custom_visual'\n# If cast_shadows is true, the geometry will cast shadows\nvisual.cast_shadows = True\n# The transparency is a double in the range of [0, 1], 0 being opaque and 1 fully transparent\nvisual.transparency = 0.2\n# Pose of the visual geometry with respect to a frame\nvisual.pose = [0, 0.2, 0, 0, 0, 0]\nvisual.pose.frame = 'base_link'\n\nprint(visual)",
"<visual name=\"custom_visual\">\n <geometry>\n <empty></empty>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n"
],
[
"# Setting different geometries to the visual element\nvisual.geometry.box = create_sdf_element('box')\nprint(visual)\nvisual.geometry.sphere = create_sdf_element('sphere')\nprint(visual)\nvisual.geometry.cylinder = create_sdf_element('cylinder')\nprint(visual)\nvisual.geometry.plane = create_sdf_element('plane')\nprint(visual)\nvisual.geometry.mesh = create_sdf_element('mesh')\nvisual.geometry.mesh.reset(with_optional_elements=True)\nprint(visual)\nvisual.geometry.image = create_sdf_element('image')\nprint(visual)\nvisual.geometry.polyline = create_sdf_element('polyline')\nprint(visual)",
"<visual name=\"custom_visual\">\n <geometry>\n <box>\n <size>0 0 0</size>\n </box>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n<visual name=\"custom_visual\">\n <geometry>\n <sphere>\n <radius>0</radius>\n </sphere>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n<visual name=\"custom_visual\">\n <geometry>\n <cylinder>\n <radius>0</radius>\n <length>0</length>\n </cylinder>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n<visual name=\"custom_visual\">\n <geometry>\n <plane>\n <size>0 0</size>\n <normal>0 0 1</normal>\n </plane>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n<visual name=\"custom_visual\">\n <geometry>\n <mesh>\n <uri></uri>\n <scale>1 1 1</scale>\n <submesh>\n <center>0</center>\n <name>none</name>\n </submesh>\n </mesh>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n<visual name=\"custom_visual\">\n <geometry>\n <image>\n <uri></uri>\n <scale>1</scale>\n <threshold>0</threshold>\n <granularity>1</granularity>\n <height>1</height>\n </image>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 
1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n<visual name=\"custom_visual\">\n <geometry>\n <polyline>\n <height>1</height>\n </polyline>\n </geometry>\n <pose frame=\"base_link\">0 0.2 0 0 0 0</pose>\n <material>\n <script>\n <name>default</name>\n <uri>file://media/materials/scripts/gazebo.material</uri>\n </script>\n <shader type=\"pixel\">\n <normal_map>default</normal_map>\n </shader>\n <lighting>0</lighting>\n <ambient>0 0 0 1</ambient>\n <diffuse>0 0 0 1</diffuse>\n <specular>0.1 0.1 0.1 1</specular>\n <emissive>0 0 0 1</emissive>\n </material>\n <transparency>0.2</transparency>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n"
],
[
"# Optional elements can also be created dynamically\nvisual = create_sdf_element('visual')\nprint(visual)",
"<visual name=\"visual\">\n <geometry>\n <empty></empty>\n </geometry>\n</visual>\n\n"
],
[
"visual.cast_shadows = True\nprint(visual)\n",
"<visual name=\"visual\">\n <geometry>\n <empty></empty>\n </geometry>\n <cast_shadows>1</cast_shadows>\n</visual>\n\n"
],
[
"visual.pose = [0, 0.2, 0, 0, 0, 0]\nprint(visual)",
"<visual name=\"visual\">\n <geometry>\n <empty></empty>\n </geometry>\n <cast_shadows>1</cast_shadows>\n <pose frame=\"\">0 0.2 0 0 0 0</pose>\n</visual>\n\n"
],
[
"# The geometry entity can be set with a dictionary with all the child parameters\nvisual.geometry.box = dict(size=[2, 3, 4])\nprint(visual)",
"<visual name=\"visual\">\n <geometry>\n <box>\n <size>2 3 4</size>\n </box>\n </geometry>\n <cast_shadows>1</cast_shadows>\n <pose frame=\"\">0 0.2 0 0 0 0</pose>\n</visual>\n\n"
],
[
"# The pose, as other variables, can be set using a dictionary\n# For SDF elements with no child elements, only values, the dictionary must always have a key 'value'\n# d = {value=[0, 0, 0, 0, 0, 0]}\n# If the element contains attributes, as the attribute 'frame' in the element 'pose', there should be a key\n# 'attributes' with a dictionary containing all the attributes\n# d = {value=[0, 0, 0, 0, 0, 0], attributes=dict(frame='new_frame')}\nvisual.pose = {'value': [0, 0.2, 0, 0, 0, 0], 'attributes': {'frame': 'new_frame'}}\nprint(visual)",
"<visual name=\"visual\">\n <geometry>\n <box>\n <size>2 3 4</size>\n </box>\n </geometry>\n <cast_shadows>1</cast_shadows>\n <pose frame=\"new_frame\">0 0.2 0 0 0 0</pose>\n</visual>\n\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c11f928a064ad2243a48936ca0221959f57843 | 17,574 | ipynb | Jupyter Notebook | ipy_notebook/requests.ipynb | NiroDu/python-tricks | 27d504655b1fd7417bd0e6058293209814efcc21 | [
"MIT"
] | null | null | null | ipy_notebook/requests.ipynb | NiroDu/python-tricks | 27d504655b1fd7417bd0e6058293209814efcc21 | [
"MIT"
] | null | null | null | ipy_notebook/requests.ipynb | NiroDu/python-tricks | 27d504655b1fd7417bd0e6058293209814efcc21 | [
"MIT"
] | null | null | null | 23.184697 | 150 | 0.529931 | [
[
[
"# requests",
"_____no_output_____"
],
[
"## 实例引入",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get('https://www.baidu.com/')\nprint(type(response))\nprint(response.status_code)\nprint(type(response.text))\nprint(response.text)\nprint(response.cookies)",
"_____no_output_____"
]
],
[
[
"## 各种请求方式",
"_____no_output_____"
]
],
[
[
"import requests\nrequests.post('http://httpbin.org/post')\nrequests.put('http://httpbin.org/put')\nrequests.delete('http://httpbin.org/delete')\nrequests.head('http://httpbin.org/get')\nrequests.options('http://httpbin.org/get')",
"_____no_output_____"
]
],
[
[
"# 请求",
"_____no_output_____"
],
[
"## 基本GET请求",
"_____no_output_____"
],
[
"### 基本写法",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get('http://httpbin.org/get')\nprint(response.text)",
"_____no_output_____"
]
],
[
[
"### 带参数GET请求",
"_____no_output_____"
]
],
[
[
"import requests\nresponse = requests.get(\"http://httpbin.org/get?name=germey&age=22\")\nprint(response.text)",
"_____no_output_____"
],
[
"import requests\n\ndata = {\n 'name': 'germey',\n 'age': 22\n}\nresponse = requests.get(\"http://httpbin.org/get\", params=data)\nprint(response.text)",
"_____no_output_____"
]
],
[
[
"### 解析json",
"_____no_output_____"
]
],
[
[
"import requests\nimport json\n\nresponse = requests.get(\"http://httpbin.org/get\")\nprint(type(response.text))\nprint(response.json())\nprint(json.loads(response.text))\nprint(type(response.json()))",
"_____no_output_____"
]
],
[
[
"### 获取二进制数据",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get(\"https://github.com/favicon.ico\")\nprint(type(response.text), type(response.content))\nprint(response.text)\nprint(response.content)",
"_____no_output_____"
],
[
"import requests\n\nresponse = requests.get(\"https://github.com/favicon.ico\")\nwith open('favicon.ico', 'wb') as f:\n f.write(response.content)\n f.close()",
"_____no_output_____"
]
],
[
[
"### 添加headers",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get(\"https://www.zhihu.com/explore\")\nprint(response.text)",
"_____no_output_____"
],
[
"import requests\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'\n}\nresponse = requests.get(\"https://www.zhihu.com/explore\", headers=headers)\nprint(response.text)",
"_____no_output_____"
]
],
[
[
"## 基本POST请求",
"_____no_output_____"
]
],
[
[
"import requests\n\ndata = {'name': 'germey', 'age': '22'}\nresponse = requests.post(\"http://httpbin.org/post\", data=data)\nprint(response.text)",
"_____no_output_____"
],
[
"import requests\n\ndata = {'name': 'germey', 'age': '22'}\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'\n}\nresponse = requests.post(\"http://httpbin.org/post\", data=data, headers=headers)\nprint(response.json())",
"_____no_output_____"
]
],
[
[
"# 响应",
"_____no_output_____"
],
[
"## reponse属性",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get('http://www.jianshu.com')\nprint(type(response.status_code), response.status_code)\nprint(type(response.headers), response.headers)\nprint(type(response.cookies), response.cookies)\nprint(type(response.url), response.url)\nprint(type(response.history), response.history)",
"_____no_output_____"
]
],
[
[
"## 状态码判断",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get('http://www.jianshu.com/hello.html')\nexit() if not response.status_code == requests.codes.not_found else print('404 Not Found')",
"_____no_output_____"
],
[
"import requests\n\nresponse = requests.get('http://www.jianshu.com')\nexit() if not response.status_code == 200 else print('Request Successfully')",
"_____no_output_____"
]
],
[
[
"下面的状态码,可以直接用string去表示,例如404可以是requests.codes.not_found,200可以是requests.codes.ok",
"_____no_output_____"
]
],
[
[
"100: ('continue',),\n101: ('switching_protocols',),\n102: ('processing',),\n103: ('checkpoint',),\n122: ('uri_too_long', 'request_uri_too_long'),\n200: ('ok', 'okay', 'all_ok', 'all_okay', 'all_good', '\\\\o/', '✓'),\n201: ('created',),\n202: ('accepted',),\n203: ('non_authoritative_info', 'non_authoritative_information'),\n204: ('no_content',),\n205: ('reset_content', 'reset'),\n206: ('partial_content', 'partial'),\n207: ('multi_status', 'multiple_status', 'multi_stati', 'multiple_stati'),\n208: ('already_reported',),\n226: ('im_used',),\n\n# Redirection.\n300: ('multiple_choices',),\n301: ('moved_permanently', 'moved', '\\\\o-'),\n302: ('found',),\n303: ('see_other', 'other'),\n304: ('not_modified',),\n305: ('use_proxy',),\n306: ('switch_proxy',),\n307: ('temporary_redirect', 'temporary_moved', 'temporary'),\n308: ('permanent_redirect',\n 'resume_incomplete', 'resume',), # These 2 to be removed in 3.0\n\n# Client Error.\n400: ('bad_request', 'bad'),\n401: ('unauthorized',),\n402: ('payment_required', 'payment'),\n403: ('forbidden',),\n404: ('not_found', '-o-'),\n405: ('method_not_allowed', 'not_allowed'),\n406: ('not_acceptable',),\n407: ('proxy_authentication_required', 'proxy_auth', 'proxy_authentication'),\n408: ('request_timeout', 'timeout'),\n409: ('conflict',),\n410: ('gone',),\n411: ('length_required',),\n412: ('precondition_failed', 'precondition'),\n413: ('request_entity_too_large',),\n414: ('request_uri_too_large',),\n415: ('unsupported_media_type', 'unsupported_media', 'media_type'),\n416: ('requested_range_not_satisfiable', 'requested_range', 'range_not_satisfiable'),\n417: ('expectation_failed',),\n418: ('im_a_teapot', 'teapot', 'i_am_a_teapot'),\n421: ('misdirected_request',),\n422: ('unprocessable_entity', 'unprocessable'),\n423: ('locked',),\n424: ('failed_dependency', 'dependency'),\n425: ('unordered_collection', 'unordered'),\n426: ('upgrade_required', 'upgrade'),\n428: ('precondition_required', 'precondition'),\n429: ('too_many_requests', 'too_many'),\n431: ('header_fields_too_large', 'fields_too_large'),\n444: ('no_response', 'none'),\n449: ('retry_with', 'retry'),\n450: ('blocked_by_windows_parental_controls', 'parental_controls'),\n451: ('unavailable_for_legal_reasons', 'legal_reasons'),\n499: ('client_closed_request',),\n\n# Server Error.\n500: ('internal_server_error', 'server_error', '/o\\\\', '✗'),\n501: ('not_implemented',),\n502: ('bad_gateway',),\n503: ('service_unavailable', 'unavailable'),\n504: ('gateway_timeout',),\n505: ('http_version_not_supported', 'http_version'),\n506: ('variant_also_negotiates',),\n507: ('insufficient_storage',),\n509: ('bandwidth_limit_exceeded', 'bandwidth'),\n510: ('not_extended',),\n511: ('network_authentication_required', 'network_auth', 'network_authentication'),",
"_____no_output_____"
]
],
[
[
"# 高级操作",
"_____no_output_____"
],
[
"## 文件上传",
"_____no_output_____"
]
],
[
[
"import requests\n\nfiles = {'file': open('favicon.ico', 'rb')}\nresponse = requests.post(\"http://httpbin.org/post\", files=files)\nprint(response.text)",
"_____no_output_____"
]
],
[
[
"## 获取cookie",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get(\"https://www.baidu.com\")\nprint(response.cookies)\nfor key, value in response.cookies.items():\n print(key + '=' + value)",
"_____no_output_____"
]
],
[
[
"## 会话维持",
"_____no_output_____"
],
[
"模拟登录",
"_____no_output_____"
]
],
[
[
"使用两次requests.get() 相当于打开了两次浏览器窗口,下面的例子第一次设置了cookies,第二次又重新打开了一个新对象去取cookies,所以取不到值。",
"_____no_output_____"
],
[
"import requests\n\nrequests.get('http://httpbin.org/cookies/set/number/123456789')\nresponse = requests.get('http://httpbin.org/cookies')\nprint(response.text)",
"_____no_output_____"
],
[
"使用requests.Session()创建出的对象,用这个对象发起两次get请求,可以维持同一个对象通信。",
"_____no_output_____"
],
[
"import requests\n\ns = requests.Session()\ns.get('http://httpbin.org/cookies/set/number/123456789')\nresponse = s.get('http://httpbin.org/cookies')\nprint(response.text)",
"_____no_output_____"
]
],
[
[
"## 证书验证",
"_____no_output_____"
]
],
[
[
"import requests\n\nresponse = requests.get('https://www.12306.cn')\nprint(response.status_code)",
"_____no_output_____"
],
[
"import requests\n# from requests.packages import urllib3\n# urllib3.disable_warnings()\nresponse = requests.get('https://www.12306.cn', verify=False)\nprint(response.status_code)",
"_____no_output_____"
],
[
"import requests\n\nresponse = requests.get('https://www.12306.cn', cert=('/path/server.crt', '/path/key'))\nprint(response.status_code)",
"_____no_output_____"
]
],
[
[
"## 代理设置",
"_____no_output_____"
]
],
[
[
"import requests\n\nproxies = {\n \"http\": \"http://127.0.0.1:9743\",\n \"https\": \"https://127.0.0.1:9743\",\n}\n\nresponse = requests.get(\"https://www.taobao.com\", proxies=proxies)\nprint(response.status_code)",
"_____no_output_____"
],
[
"import requests\n\nproxies = {\n \"http\": \"http://user:[email protected]:9743/\",\n}\nresponse = requests.get(\"https://www.taobao.com\", proxies=proxies)\nprint(response.status_code)",
"_____no_output_____"
],
[
"pip3 install 'requests[socks]'",
"_____no_output_____"
],
[
"import requests\n\nproxies = {\n 'http': 'socks5://127.0.0.1:9742',\n 'https': 'socks5://127.0.0.1:9742'\n}\nresponse = requests.get(\"https://www.taobao.com\", proxies=proxies)\nprint(response.status_code)",
"_____no_output_____"
]
],
[
[
"## 超时设置",
"_____no_output_____"
]
],
[
[
"import requests\nfrom requests.exceptions import ReadTimeout\ntry:\n response = requests.get(\"http://httpbin.org/get\", timeout = 0.5)\n print(response.status_code)\nexcept ReadTimeout:\n print('Timeout')",
"_____no_output_____"
]
],
[
[
"## 认证设置",
"_____no_output_____"
]
],
[
[
"当网站需要输入用户名和密码才可以访问的时候,下面两种方式都可以模拟账号密码为参数的请求。",
"_____no_output_____"
],
[
"import requests\nfrom requests.auth import HTTPBasicAuth\n\nr = requests.get('http://120.27.34.24:9001', auth=HTTPBasicAuth('user', '123'))\nprint(r.status_code)",
"_____no_output_____"
],
[
"import requests\n\nr = requests.get('http://120.27.34.24:9001', auth=('user', '123'))\nprint(r.status_code)",
"_____no_output_____"
]
],
[
[
"## 异常处理",
"_____no_output_____"
],
[
"[requests exceptions](https://requests.kennethreitz.org/en/master/api/#exceptions)",
"_____no_output_____"
]
],
[
[
"import requests\nfrom requests.exceptions import ReadTimeout, ConnectionError, RequestException\ntry:\n response = requests.get(\"http://httpbin.org/get\", timeout = 0.5)\n print(response.status_code)\nexcept ReadTimeout:\n print('Timeout')\nexcept ConnectionError:\n print('Connection error')\nexcept RequestException:\n print('Error')",
"Connection error\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0c154706af3366eab71006468ae128197be6762 | 18,278 | ipynb | Jupyter Notebook | notebooks/A Very Extensive Freesound Exploratory Analysis.ipynb | dhiraa/sabdha | f428418962dcc76f49e0a451ffc0545fda9b6b59 | [
"Apache-2.0"
] | 4 | 2018-10-26T07:00:34.000Z | 2020-10-07T01:03:08.000Z | notebooks/A Very Extensive Freesound Exploratory Analysis.ipynb | dhiraa/sabdha | f428418962dcc76f49e0a451ffc0545fda9b6b59 | [
"Apache-2.0"
] | null | null | null | notebooks/A Very Extensive Freesound Exploratory Analysis.ipynb | dhiraa/sabdha | f428418962dcc76f49e0a451ffc0545fda9b6b59 | [
"Apache-2.0"
] | 1 | 2018-10-26T07:00:38.000Z | 2018-10-26T07:00:38.000Z | 28.966719 | 437 | 0.587263 | [
[
[
"# More To Come. Stay Tuned. !!\nIf there are any suggestions/changes you would like to see in the Kernel please let me know :). Appreciate every ounce of help!\n\n**This notebook will always be a work in progress**. Please leave any comments about further improvements to the notebook! Any feedback or constructive criticism is greatly appreciated!. **If you like it or it helps you , you can upvote and/or leave a comment :).**|\n",
"_____no_output_____"
]
],
[
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline \n\nimport IPython.display as ipd # To play sound in the notebook\nfrom tqdm import tqdm_notebook\nimport wave\nfrom scipy.io import wavfile\nSAMPLE_RATE = 44100\n\nimport seaborn as sns # for making plots with seaborn\ncolor = sns.color_palette()\nimport plotly.offline as py\npy.init_notebook_mode(connected=True)\nimport plotly.graph_objs as go\nimport plotly.offline as offline\noffline.init_notebook_mode()\nimport plotly.tools as tls\n# Math\nimport numpy as np\nfrom scipy.fftpack import fft\nfrom scipy import signal\nfrom scipy.io import wavfile\nimport librosa",
"_____no_output_____"
],
[
"import os\nprint(os.listdir(\"../input\"))",
"_____no_output_____"
],
[
"INPUT_LIB = '../input/'\naudio_train_files = os.listdir('../input/audio_train')\naudio_test_files = os.listdir('../input/audio_test')\ntrain = pd.read_csv('../input/train.csv')\nsubmission = pd.read_csv(\"../input/sample_submission.csv\", index_col='fname')\ntrain_audio_path = '../input/audio_train/'\nfilename = '/001ca53d.wav' # Hi-hat\nsample_rate, samples = wavfile.read(str(train_audio_path) + filename)\n#sample_rate = 16000",
"_____no_output_____"
],
[
"print(samples)",
"_____no_output_____"
],
[
"print(\"Size of training data\",train.shape)",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"submission.head()",
"_____no_output_____"
],
[
"def clean_filename(fname, string): \n file_name = fname.split('/')[1]\n if file_name[:2] == '__': \n file_name = string + file_name\n return file_name\n\ndef load_wav_file(name, path):\n _, b = wavfile.read(path + name)\n assert _ == SAMPLE_RATE\n return b",
"_____no_output_____"
],
[
"train_data = pd.DataFrame({'file_name' : train['fname'],\n 'target' : train['label']}) \ntrain_data['time_series'] = train_data['file_name'].apply(load_wav_file, \n path=INPUT_LIB + 'audio_train/') \ntrain_data['nframes'] = train_data['time_series'].apply(len) ",
"_____no_output_____"
],
[
"train_data.head()",
"_____no_output_____"
],
[
"print(\"Size of training data after some preprocessing : \",train_data.shape)",
"_____no_output_____"
],
[
"# missing data in training data set\ntotal = train_data.isnull().sum().sort_values(ascending = False)\npercent = (train_data.isnull().sum()/train_data.isnull().count()).sort_values(ascending = False)\nmissing_train_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])\nmissing_train_data.head()",
"_____no_output_____"
]
],
[
[
"There is no missing data in training dataset",
"_____no_output_____"
],
[
"# Manually verified Audio",
"_____no_output_____"
]
],
[
[
"temp = train['manually_verified'].value_counts()\nlabels = temp.index\nsizes = (temp / temp.sum())*100\ntrace = go.Pie(labels=labels, values=sizes, hoverinfo='label+percent')\nlayout = go.Layout(title='Manually varification of labels(0 - No, 1 - Yes)')\ndata = [trace]\nfig = go.Figure(data=data, layout=layout)\npy.iplot(fig)",
"_____no_output_____"
]
],
[
[
"* Approximately 40 % labels are manually varified.",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12,8))\nsns.distplot(train_data.nframes.values, bins=50, kde=False)\nplt.xlabel('nframes', fontsize=12)\nplt.title(\"Histogram of #frames\")\nplt.show()",
"_____no_output_____"
],
[
"plt.figure(figsize=(17,8))\nboxplot = sns.boxplot(x=\"target\", y=\"nframes\", data=train_data)\nboxplot.set(xlabel='', ylabel='')\nplt.title('Distribution of audio frames, per label', fontsize=17)\nplt.xticks(rotation=80, fontsize=17)\nplt.yticks(fontsize=17)\nplt.xlabel('Label name')\nplt.ylabel('nframes')\nplt.show()",
"_____no_output_____"
],
[
"print(\"Total number of labels in training data : \",len(train_data['target'].value_counts()))\nprint(\"Labels are : \", train_data['target'].unique())\nplt.figure(figsize=(15,8))\naudio_type = train_data['target'].value_counts().head(30)\nsns.barplot(audio_type.values, audio_type.index)\nfor i, v in enumerate(audio_type.values):\n plt.text(0.8,i,v,color='k',fontsize=12)\nplt.xticks(rotation='vertical')\nplt.xlabel('Frequency')\nplt.ylabel('Label Name')\nplt.title(\"Top 30 labels with their frequencies in training data\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Total number of labels are 41",
"_____no_output_____"
]
],
[
[
"temp = train_data.sort_values(by='target')\ntemp.head()",
"_____no_output_____"
]
],
[
[
"## Now look at some labels waveform :\n 1. Acoustic_guitar\n 2. Applause\n 3. Bark",
"_____no_output_____"
],
[
"## 1. Acoustic_guitar",
"_____no_output_____"
]
],
[
[
"print(\"Acoustic_guitar : \")\nfig, ax = plt.subplots(10, 4, figsize = (12, 16))\nfor i in range(40):\n ax[i//4, i%4].plot(temp['time_series'][i])\n ax[i//4, i%4].set_title(temp['file_name'][i][:-4])\n ax[i//4, i%4].get_xaxis().set_ticks([])\nfig.savefig(\"AudioWaveform\", dpi=900) ",
"_____no_output_____"
]
],
[
[
"## 2. Applause",
"_____no_output_____"
]
],
[
[
"print(\"Applause : \")\nfig, ax = plt.subplots(10, 4, figsize = (12, 16))\nfor i in range(40):\n ax[i//4, i%4].plot(temp['time_series'][i+300])\n ax[i//4, i%4].set_title(temp['file_name'][i+300][:-4])\n ax[i//4, i%4].get_xaxis().set_ticks([])",
"_____no_output_____"
]
],
[
[
"## 3. Bark",
"_____no_output_____"
]
],
[
[
"print(\"Bark : \")\nfig, ax = plt.subplots(10, 4, figsize = (12, 16))\nfor i in range(40):\n ax[i//4, i%4].plot(temp['time_series'][i+600])\n ax[i//4, i%4].set_title(temp['file_name'][i+600][:-4])\n ax[i//4, i%4].get_xaxis().set_ticks([])",
"_____no_output_____"
],
[
"from wordcloud import WordCloud\nwordcloud = WordCloud(max_font_size=50, width=600, height=300).generate(' '.join(train_data.target))\nplt.figure(figsize=(15,8))\nplt.imshow(wordcloud)\nplt.title(\"Wordcloud for Labels\", fontsize=35)\nplt.axis(\"off\")\nplt.show() \n#fig.savefig(\"LabelsWordCloud\", dpi=900)",
"_____no_output_____"
]
],
[
[
"# Spectrogram",
"_____no_output_____"
]
],
[
[
"def log_specgram(audio, sample_rate, window_size=20,\n step_size=10, eps=1e-10):\n nperseg = int(round(window_size * sample_rate / 1e3))\n noverlap = int(round(step_size * sample_rate / 1e3))\n freqs, times, spec = signal.spectrogram(audio,\n fs=sample_rate,\n window='hann',\n nperseg=nperseg,\n noverlap=noverlap,\n detrend=False)\n return freqs, times, np.log(spec.T.astype(np.float32) + eps)",
"_____no_output_____"
],
[
"freqs, times, spectrogram = log_specgram(samples, sample_rate)\n\nfig = plt.figure(figsize=(18, 8))\nax2 = fig.add_subplot(211)\nax2.imshow(spectrogram.T, aspect='auto', origin='lower', \n extent=[times.min(), times.max(), freqs.min(), freqs.max()])\nax2.set_yticks(freqs[::40])\nax2.set_xticks(times[::40])\nax2.set_title('Spectrogram of Hi-hat ' + filename)\nax2.set_ylabel('Freqs in Hz')\nax2.set_xlabel('Seconds')",
"_____no_output_____"
]
],
[
[
"# Specgtrogram of \"Hi-Hat\" in 3d",
"_____no_output_____"
],
[
"If we use spectrogram as an input features for NN, we have to remember to normalize features.",
"_____no_output_____"
]
],
[
[
"mean = np.mean(spectrogram, axis=0)\nstd = np.std(spectrogram, axis=0)\nspectrogram = (spectrogram - mean) / std",
"_____no_output_____"
],
[
"data = [go.Surface(z=spectrogram.T)]\nlayout = go.Layout(\n title='Specgtrogram of \"Hi-Hat\" in 3d',\n scene = dict(\n yaxis = dict(title='Frequencies', range=freqs),\n xaxis = dict(title='Time', range=times),\n zaxis = dict(title='Log amplitude'),\n ),\n)\nfig = go.Figure(data=data, layout=layout)\npy.iplot(fig)",
"_____no_output_____"
]
],
[
[
"# More To Come. Stayed Tuned !!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0c157ec78d983c4a6d557fa19df85c42328a786 | 220,425 | ipynb | Jupyter Notebook | notebooks/old notebooks/demintro02.ipynb | snowdj/CompEcon-python | 883ac75750800e2792218a7b13f97e681498a389 | [
"MIT"
] | 23 | 2016-12-14T13:21:27.000Z | 2020-08-23T21:04:34.000Z | notebooks/old notebooks/demintro02.ipynb | snowdj/CompEcon-python | 883ac75750800e2792218a7b13f97e681498a389 | [
"MIT"
] | 1 | 2017-09-10T04:48:54.000Z | 2018-03-31T01:36:46.000Z | notebooks/old notebooks/demintro02.ipynb | snowdj/CompEcon-python | 883ac75750800e2792218a7b13f97e681498a389 | [
"MIT"
] | 13 | 2017-02-25T08:10:38.000Z | 2020-05-15T09:49:16.000Z | 82.494386 | 48,033 | 0.718027 | [
[
[
"### DemIntro02:\n# Rational Expectations Agricultural Market Model",
"_____no_output_____"
],
[
"#### Preliminary task:\nLoad required modules",
"_____no_output_____"
]
],
[
[
"from compecon.quad import qnwlogn\nfrom compecon.tools import discmoments\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set_style('dark')\n%matplotlib notebook",
"_____no_output_____"
]
],
[
[
"Generate yield distribution",
"_____no_output_____"
]
],
[
[
"sigma2 = 0.2 ** 2\ny, w = qnwlogn(25, -0.5 * sigma2, sigma2)",
"_____no_output_____"
]
],
[
[
"## Compute rational expectations equilibrium using function iteration, iterating on acreage planted",
"_____no_output_____"
]
],
[
[
"A = lambda aa, pp: 0.5 + 0.5 * np.dot(w, np.maximum(1.5 - 0.5 * aa * y, pp))\n\nptarg = 1\na = 1\nprint('{:^6} {:^10} {:^10}\\n{}'.format('iter', 'a', \"|a' - a|\",'-' * 27))\nfor it in range(50):\n aold = a\n a = A(a, ptarg)\n print('{:^6} {:^10.4f} {:^10.1e}'.format(it, a, np.linalg.norm(a - aold)))\n if np.linalg.norm(a - aold) < 1.e-8:\n break",
" iter a |a' - a| \n---------------------------\n 0 1.0198 2.0e-02 \n 1 1.0171 2.7e-03 \n 2 1.0175 3.7e-04 \n 3 1.0174 5.0e-05 \n 4 1.0174 6.8e-06 \n 5 1.0174 9.3e-07 \n 6 1.0174 1.3e-07 \n 7 1.0174 1.7e-08 \n 8 1.0174 2.3e-09 \n"
]
],
[
[
"Intermediate outputs",
"_____no_output_____"
]
],
[
[
"q = a * y # quantity produced in each state\np = 1.5 - 0.5 * a * y # market price in each state\nf = np.maximum(p, ptarg) # farm price in each state\nr = f * q # farm revenue in each state\ng = (f - p) * q #government expenditures",
"_____no_output_____"
]
],
[
[
"Print results",
"_____no_output_____"
]
],
[
[
"varnames = ['Market Price', 'Farm Price', 'Farm Revenue', 'Government Expenditures']\nxavg, xstd = discmoments(w, np.vstack((p, f, r, g)))\nprint('\\n{:^24} {:^8} {:^8}\\n{}'.format('Variable', 'Expect', 'Std Dev','-'*42))\nfor varname, av, sd in zip(varnames, xavg, xstd):\n print('{:24} {:8.4f} {:8.4f}'.format(varname, av, sd))",
"\n Variable Expect Std Dev \n------------------------------------------\nMarket Price 0.9913 0.1028\nFarm Price 1.0348 0.0506\nFarm Revenue 1.0447 0.1773\nGovernment Expenditures 0.0573 0.1038\n"
]
],
[
[
"## Generate fixed-point mapping",
"_____no_output_____"
]
],
[
[
"aeq = a\na = np.linspace(0, 2, 100)\ng = np.array([A(k, ptarg) for k in a])",
"_____no_output_____"
]
],
[
[
"Graph rational expectations equilibrium",
"_____no_output_____"
]
],
[
[
"fig1 = plt.figure(figsize=[6, 6])\nax = fig1.add_subplot(111, title='Rational expectations equilibrium', aspect=1,\n xlabel='Acreage Planted', xticks=[0, aeq, 2], xticklabels=['0', '$a^{*}$', '2'],\n ylabel='Rational Acreage Planted', yticks=[0, aeq, 2],yticklabels=['0', '$a^{*}$', '2'])\n\nax.plot(a, g, 'b', linewidth=4)\nax.plot(a, a, ':', color='grey', linewidth=2)\nax.plot([0, aeq, aeq], [aeq, aeq, 0], 'r--', linewidth=3)\nax.plot([aeq], [aeq], 'ro', markersize=12)\nax.text(0.05, 0, '45${}^o$', color='grey')\nax.text(1.85, aeq - 0.15,'$g(a)$', color='blue')\nfig1.show()",
"_____no_output_____"
]
],
[
[
"## Compute rational expectations equilibrium as a function of the target price",
"_____no_output_____"
]
],
[
[
"nplot = 50\nptarg = np.linspace(0, 2, nplot)\na = 1\nEp, Ef, Er, Eg, Sp, Sf, Sr, Sg = (np.empty(nplot) for k in range(8))\n\nfor ip in range(nplot):\n for it in range(50):\n aold = a\n a = A(a, ptarg[ip])\n if np.linalg.norm((a - aold) < 1.e-10):\n break\n\n q = a * y # quantity produced\n p = 1.5 - 0.5 * a * y # market price\n f = np.maximum(p, ptarg[ip]) # farm price\n r = f * q # farm revenue\n g = (f - p) * q # government expenditures\n\n xavg, xstd = discmoments(w, np.vstack((p, f, r, g)))\n Ep[ip], Ef[ip], Er[ip], Eg[ip] = tuple(xavg)\n Sp[ip], Sf[ip], Sr[ip], Sg[ip] = tuple(xstd)\n\n\nzeroline = lambda y: plt.axhline(y[0], linestyle=':', color='gray', hold=True)",
"_____no_output_____"
]
],
[
[
"Graph expected prices vs target price",
"_____no_output_____"
]
],
[
[
"fig2 = plt.figure(figsize=[8, 6])\nax1 = fig2.add_subplot(121, title='Expected price',\n xlabel='Target price', xticks=[0, 1, 2],\n ylabel='Expectation', yticks=[0.5, 1, 1.5, 2], ylim=[0.5, 2.0])\nzeroline(Ep)\nax1.plot(ptarg, Ep, linewidth=4, label='Market Price')\nax1.plot(ptarg, Ef, linewidth=4, label='Farm Price')\nax1.legend(loc='upper left')",
"_____no_output_____"
]
],
[
[
"Graph expected prices vs target price",
"_____no_output_____"
]
],
[
[
"ax2 = fig2.add_subplot(122, title='Price variabilities',\n xlabel='Target price', xticks=[0, 1, 2],\n ylabel='Standard deviation', yticks=[0, 0.1, 0.2]) #plt.ylim(0.5, 2.0)\nzeroline(Sf)\nax2.plot(ptarg, Sp, linewidth=4, label='Market Price')\nax2.plot(ptarg, Sf, linewidth=4, label='Farm Price')\nax2.legend(loc='upper left')\nfig2.show()",
"_____no_output_____"
]
],
[
[
"Graph expected farm revenue vs target price",
"_____no_output_____"
]
],
[
[
"fig3 = plt.figure(figsize=[12, 6])\nax1 = fig3.add_subplot(131, title='Expected revenue',\n xlabel='Target price', xticks=[0, 1, 2],\n ylabel='Expectation', yticks=[1, 2, 3], ylim=[0.8, 3.0])\nzeroline(Er)\nax1.plot(ptarg, Er, linewidth=4)",
"_____no_output_____"
]
],
[
[
"Graph standard deviation of farm revenue vs target price",
"_____no_output_____"
]
],
[
[
"ax2 = fig3.add_subplot(132, title='Farm Revenue Variability',\n xlabel='Target price', xticks=[0, 1, 2],\n ylabel='Standard deviation', yticks=[0, 0.2, 0.4])\nzeroline(Sr)\nax2.plot(ptarg, Sr, linewidth=4)",
"_____no_output_____"
]
],
[
[
"Graph expected government expenditures vs target price",
"_____no_output_____"
]
],
[
[
"ax3 = fig3.add_subplot(133, title='Expected Government Expenditures',\n xlabel='Target price', xticks=[0, 1, 2],\n ylabel='Expectation', yticks=[0, 1, 2], ylim=[-0.05, 2.0])\nzeroline(Eg)\nax3.plot(ptarg, Eg, linewidth=4)\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c159b6aa058f830e8d940605c6d12eb2859d45 | 531,065 | ipynb | Jupyter Notebook | task5/Stock_LSTM[final].ipynb | raviyadav44/Brillect-PML-Internship | c787c77a86bf3e5e4cd4b279265f6991421a896f | [
"Apache-2.0"
] | null | null | null | task5/Stock_LSTM[final].ipynb | raviyadav44/Brillect-PML-Internship | c787c77a86bf3e5e4cd4b279265f6991421a896f | [
"Apache-2.0"
] | null | null | null | task5/Stock_LSTM[final].ipynb | raviyadav44/Brillect-PML-Internship | c787c77a86bf3e5e4cd4b279265f6991421a896f | [
"Apache-2.0"
] | null | null | null | 121.497369 | 107,218 | 0.752343 | [
[
[
"from numpy import array\nimport datetime as dt\nfrom matplotlib import pyplot as plt\nfrom sklearn import model_selection\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.model_selection import train_test_split\nimport numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import MinMaxScaler\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nfrom keras.layers import Dropout\nimport math",
"_____no_output_____"
],
[
"\ndf=pd.read_csv(\"/content/drive/MyDrive/stocks.csv\" )\ndf[\"Date\"] = pd.to_datetime(df['Date'],format=\"%Y-%m-%d\")",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.tail()",
"_____no_output_____"
],
[
"start = dt.datetime(2012,1,12)\nend = dt.datetime(2020,8,5)\nGoog=df['GOOG']",
"_____no_output_____"
],
[
"Goog.head()",
"_____no_output_____"
],
[
"Goog.isna().sum()",
"_____no_output_____"
]
],
[
[
"#'IBM' Stock analysis for monthly , weekly as well as daily stock prices.",
"_____no_output_____"
]
],
[
[
"df['IBM_week'] = df.IBM.rolling(7).mean().shift()\ndf['IBM_month'] = df.IBM.rolling(30).mean()",
"_____no_output_____"
],
[
"plt.figure(figsize=(15,10))\nplt.grid(True)\nplt.plot(df['IBM_week'],label='Weekly')\nplt.plot(df['IBM_month'], label='Monthly')\nplt.plot(df['IBM'],label='IBM')\nplt.legend(loc=2)",
"_____no_output_____"
]
],
[
[
"#APPL stocks",
"_____no_output_____"
]
],
[
[
"df['AAPL_week'] = df.IBM.rolling(7).mean()\ndf['AAPL_month'] = df.IBM.rolling(30).mean()\nplt.figure(figsize=(15,10))\nplt.grid(True)\nplt.plot(df['AAPL_week'],label='Weekly')\nplt.plot(df['AAPL_month'], label='Monthly')\nplt.plot(df['AAPL'],label='AAPL')\nplt.legend(loc=2)",
"_____no_output_____"
],
[
"from statsmodels.tsa.stattools import adfuller\n\ndef adf_test(dataset):\n dftest = adfuller(dataset, autolag = 'AIC')\n print(\"1. ADF : \",dftest[0])\n print(\"2. P-Value : \", dftest[1])\n print(\"3. Num Of Lags : \", dftest[2])\n print(\"4. Num Of Observations Used For ADF Regression and Critical Values Calculation :\", dftest[3])\n print(\"5. Critical Values :\")\n for key, val in dftest[4].items():\n print(\"\\t\",key, \": \", val)",
"/usr/local/lib/python3.7/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n"
],
[
"adf_test(Goog)",
"1. ADF : -0.2116028523414724\n2. P-Value : 0.9371243822985593\n3. Num Of Lags : 22\n4. Num Of Observations Used For ADF Regression and Critical Values Calculation : 2136\n5. Critical Values :\n\t 1% : -3.4334151573147094\n\t 5% : -2.8628940688135\n\t 10% : -2.5674908403908736\n"
],
[
"#define function for kpss test\nfrom statsmodels.tsa.stattools import kpss\n#define KPSS\ndef kpss_test(timeseries):\n print ('Results of KPSS Test:')\n kpsstest = kpss(timeseries, regression='c')\n kpss_output = pd.Series(kpsstest[0:3], index=['Test Statistic','p-value','Lags Used'])\n for key,value in kpsstest[3].items():\n kpss_output['Critical Value (%s)'%key] = value\n print (kpss_output)",
"_____no_output_____"
],
[
"kpss_test(Goog)",
"Results of KPSS Test:\nTest Statistic 7.8106\np-value 0.0100\nLags Used 26.0000\nCritical Value (10%) 0.3470\nCritical Value (5%) 0.4630\nCritical Value (2.5%) 0.5740\nCritical Value (1%) 0.7390\ndtype: float64\n"
],
[
"goog=Goog.values",
"_____no_output_____"
],
[
"goog",
"_____no_output_____"
],
[
"#dataframe with only apple col\ndata = df.filter(['GOOG'])\n#convert df into numpy arr\ndataset = data.values\n\ntraining_data_len = math.ceil(len(dataset)*0.8)\ntraining_data_len",
"_____no_output_____"
],
[
"#scaling\nscaler = MinMaxScaler(feature_range=(0,1))\n#transforming values to 0&1 before it is passed into the nw\nscaled_data = scaler.fit_transform(dataset)\nscaled_data=scaled_data.flatten()\nscaled_data",
"_____no_output_____"
],
[
"# split a univariate sequence\ndef split_sequence(sequence, n_steps):\n\tX, y = list(), list()\n\tfor i in range(len(sequence)):\n\t\t# find the end of this pattern\n\t\tend_ix = i + n_steps\n\t\t# check if we are beyond the sequence\n\t\tif end_ix > len(sequence)-1:\n\t\t\tbreak\n\t\t# gather input and output parts of the pattern\n\t\tseq_x, seq_y = sequence[i:end_ix], sequence[end_ix]\n\t\tX.append(seq_x)\n\t\ty.append(seq_y)\n\treturn array(X), array(y)\n\nn_steps=5 \nX, y = split_sequence(scaled_data, n_steps)\n# summarize the data\nfor i in range(len(X)):\n\tprint(X[i], y[i])",
"[0.02725811 0.02546253 0.0268488 0.02852082 0.03109256] 0.010402803300525398\n[0.02546253 0.0268488 0.02852082 0.03109256 0.0104028 ] 0.010221307642083621\n[0.0268488 0.02852082 0.03109256 0.0104028 0.01022131] 0.00844891123182312\n[0.02852082 0.03109256 0.0104028 0.01022131 0.00844891] 0.004031391819500463\n[0.03109256 0.0104028 0.01022131 0.00844891 0.00403139] 0.00349464045020606\n[0.0104028 0.01022131 0.00844891 0.00403139 0.00349464] 0.008082063352157581\n[0.01022131 0.00844891 0.00403139 0.00349464 0.00808206] 0.007197793428418392\n[0.00844891 0.00403139 0.00349464 0.00808206 0.00719779] 0.008132263784159405\n[0.00403139 0.00349464 0.00808206 0.00719779 0.00813226] 0.008410278937790727\n[0.00349464 0.00808206 0.00719779 0.00813226 0.00841028] 0.01006299554607934\n[0.00808206 0.00719779 0.00813226 0.00841028 0.010063 ] 0.014395563601181854\n[0.00719779 0.00813226 0.00841028 0.010063 0.01439556] 0.019322793482390965\n[0.00813226 0.00841028 0.010063 0.01439556 0.01932279] 0.01842693138983878\n[0.00841028 0.010063 0.01439556 0.01932279 0.01842693] 0.019616280623272198\n[0.010063 0.01439556 0.01932279 0.01842693 0.01961628] 0.02023796009413173\n[0.01439556 0.01932279 0.01842693 0.01961628 0.02023796] 0.01809486001661198\n[0.01932279 0.01842693 0.01961628 0.02023796 0.01809486] 0.020523711628982255\n[0.01842693 0.01961628 0.02023796 0.01809486 0.02052371] 0.01958152892286541\n[0.01961628 0.02023796 0.01809486 0.02052371 0.01958153] 0.017959708227390014\n[0.02023796 0.01809486 0.02052371 0.01958153 0.01795971] 0.01833041111946071\n[0.01809486 0.02052371 0.01958153 0.01795971 0.01833041] 0.017604453291725913\n[0.02052371 0.01958153 0.01795971 0.01833041 0.01760445] 0.021218774319092498\n[0.01958153 0.01795971 0.01833041 0.01760445 0.02121877] 0.018878730626724488\n[0.01795971 0.01833041 0.01760445 0.02121877 0.01887873] 0.018172075767801388\n[0.01833041 0.01760445 0.02121877 0.01887873 0.01817208] 0.019635585142461\n[0.01760445 0.02121877 0.01887873 0.01817208 0.01963559] 0.01940774561479955\n[0.02121877 0.01887873 0.01817208 0.01963559 0.01940775] 0.022913954978163564\n[0.01887873 0.01817208 0.01963559 0.01940775 0.02291395] 0.022859898758568087\n[0.01817208 0.01963559 0.01940775 0.02291395 0.0228599 ] 0.02446241571004326\n[0.01963559 0.01940775 0.02291395 0.0228599 0.02446242] 0.02401835207918837\n[0.01940775 0.02291395 0.0228599 0.02446242 0.02401835] 0.021315317845125498\n[0.02291395 0.0228599 0.02446242 0.02401835 0.02131532] 0.017728012912134966\n[0.0228599 0.02446242 0.02401835 0.02131532 0.01772801] 0.018438523558651693\n[0.02446242 0.02401835 0.02131532 0.01772801 0.01843852] 0.018569818785091535\n[0.02401835 0.02131532 0.01772801 0.01843852 0.01856982] 0.015909271857466267\n[0.02131532 0.01772801 0.01843852 0.01856982 0.01590927] 0.017801372875730692\n[0.01772801 0.01843852 0.01856982 0.01590927 0.01780137] 0.022678427130969847\n[0.01843852 0.01856982 0.01590927 0.01780137 0.02267843] 0.021987220228453314\n[0.01856982 0.01590927 0.01780137 0.02267843 0.02198722] 0.02397200820996867\n[0.01590927 0.01780137 0.02267843 0.02198722 0.02397201] 0.025481837423004527\n[0.01780137 0.02267843 0.02198722 0.02397201 0.02548184] 0.028933990566773066\n[0.02267843 0.02198722 0.02397201 0.02548184 0.02893399] 0.028744783333144064\n[0.02198722 0.02397201 0.02548184 0.02893399 0.02874478] 0.03125087395235859\n[0.02397201 0.02548184 0.02893399 0.02874478 0.03125087] 0.033594797463163684\n[0.02548184 0.02893399 0.02874478 0.03125087 0.0335948 ] 0.032258728302538814\n[0.02893399 0.02874478 0.03125087 
0.0335948 0.03225873] 0.034861339192131496\n[0.02874478 0.03125087 0.0335948 0.03225873 0.03486134] 0.03396935691801653\n[0.03125087 0.0335948 0.03225873 0.03486134 0.03396936] 0.03734427027975226\n[0.0335948 0.03225873 0.03486134 0.03396936 0.03734427] 0.03450608425646737\n[0.03225873 0.03486134 0.03396936 0.03734427 0.03450608] 0.03173742488965106\n[0.03486134 0.03396936 0.03734427 0.03450608 0.03173742] 0.033930725399172634\n[0.03396936 0.03734427 0.03450608 0.03173742 0.03393073] 0.03227029644050827\n[0.03734427 0.03450608 0.03173742 0.03393073 0.0322703 ] 0.029385789803658746\n[0.03450608 0.03173742 0.03393073 0.0322703 0.02938579] 0.028292984096258328\n[0.03173742 0.03393073 0.0322703 0.02938579 0.02829298] 0.027721505057400703\n[0.03393073 0.0322703 0.02938579 0.02829298 0.02772151] 0.02618463649433392\n[0.0322703 0.02938579 0.02829298 0.02772151 0.02618464] 0.029698557432885242\n[0.02938579 0.02829298 0.02772151 0.02618464 0.02969856] 0.03551008204386544\n[0.02829298 0.02772151 0.02618464 0.02969856 0.03551008] 0.025311933933375746\n[0.02772151 0.02618464 0.02969856 0.03551008 0.02531193] 0.01815662781139482\n[0.02618464 0.02969856 0.03551008 0.02531193 0.01815663] 0.019508145703614754\n[0.02969856 0.03551008 0.02531193 0.01815663 0.01950815] 0.01868952261790699\n[0.03551008 0.02531193 0.01815663 0.01950815 0.01868952] 0.015542424752989198\n[0.02531193 0.01815663 0.01950815 0.01868952 0.01554242] 0.014291307724772967\n[0.01815663 0.01950815 0.01868952 0.01554242 0.01429131] 0.014885970326067977\n[0.01950815 0.01868952 0.01554242 0.01429131 0.01488597] 0.016303135831507803\n[0.01868952 0.01554242 0.01429131 0.01488597 0.01630314] 0.01956608096645887\n[0.01554242 0.01429131 0.01488597 0.01630314 0.01956608] 0.0217864208260114\n[0.01429131 0.01488597 0.01630314 0.01956608 0.02178642] 0.0215972128171939\n[0.01488597 0.01630314 0.01956608 0.02178642 0.02159721] 0.017685548861352374\n[0.01630314 0.01956608 0.02178642 0.02159721 0.01768555] 0.0175233569469109\n[0.01956608 0.02178642 0.02159721 0.01768555 0.01752336] 0.01861616265431129\n[0.02178642 0.02159721 0.01768555 0.01752336 0.01861616] 0.02006805660450295\n[0.02159721 0.01768555 0.01752336 0.01861616 0.02006806] 0.014642706872843442\n[0.01768555 0.01752336 0.01861616 0.02006806 0.01464271] 0.018728130881095928\n[0.01752336 0.01861616 0.02006806 0.01464271 0.01872813] 0.020751550381455153\n[0.01861616 0.02006806 0.01464271 0.01872813 0.02075155] 0.019345977820016652\n[0.02006806 0.01464271 0.01872813 0.02075155 0.01934598] 0.021087502348307557\n[0.01464271 0.01872813 0.02075155 0.01934598 0.0210875 ] 0.017832268788543826\n[0.01872813 0.02075155 0.01934598 0.0210875 0.01783227] 0.017357309244875774\n[0.02075155 0.01934598 0.0210875 0.01783227 0.01735731] 0.02010280830490971\n[0.01934598 0.0210875 0.01783227 0.01735731 0.02010281] 0.02698395428566444\n[0.0210875 0.01783227 0.01735731 0.02010281 0.02698395] 0.02471341476929853\n[0.01783227 0.01735731 0.02010281 0.02698395 0.02471341] 0.0159671838646554\n[0.01735731 0.02010281 0.02698395 0.02471341 0.01596718] 0.021261261625529992\n[0.02010281 0.02698395 0.02471341 0.01596718 0.02126126] 0.01612164017306611\n[0.02698395 0.02471341 0.01596718 0.02126126 0.01612164] 0.019465681652832217\n[0.02471341 0.01596718 0.02126126 0.01612164 0.01946568] 0.01722601401843593\n[0.01596718 0.02126126 0.01612164 0.01946568 0.01722601] 0.01254207084610634\n[0.02126126 0.01612164 0.01946568 0.01722601 0.01254207] 0.013627140947475913\n[0.01612164 0.01946568 0.01722601 0.01254207 0.01362714] 
0.011267769480264367\n[0.01946568 0.01722601 0.01254207 0.01362714 0.01126777] 0.00842187110660364\n[0.01722601 0.01254207 0.01362714 0.01126777 0.00842187] 0.004606750676795168\n[0.01254207 0.01362714 0.01126777 0.00842187 0.00460675] 0.007545312758051759\n[0.01362714 0.01126777 0.00842187 0.00460675 0.00754531] 0.004386646755164536\n[0.01126777 0.00842187 0.00460675 0.00754531 0.00438665] 0.00830987962416399\n[0.00842187 0.00460675 0.00754531 0.00438665 0.00830988] 0.007406305181236111\n[0.00460675 0.00754531 0.00438665 0.00830988 0.00740631] 0.008263559010599275\n[0.00754531 0.00438665 0.00830988 0.00740631 0.00826356] 0.003649096758616799\n[0.00438665 0.00830988 0.00740631 0.00826356 0.0036491 ] 0.002336187129585776\n[0.00830988 0.00740631 0.00826356 0.0036491 0.00233619] 0.000787749653361064\n[0.00740631 0.00826356 0.0036491 0.00233619 0.00078775] 0.0\n[0.00826356 0.0036491 0.00233619 0.00078775 0. ] 0.0021083716327678637\n[0.0036491 0.00233619 0.00078775 0. 0.00210837] 0.004556550244793289\n[0.00233619 0.00078775 0. 0.00210837 0.00455655] 0.00868058251623477\n[0.00078775 0. 0.00210837 0.00455655 0.00868058] 0.007128289252416348\n[0. 0.00210837 0.00455655 0.00868058 0.00712829] 0.002378675211211878\n[0.00210837 0.00455655 0.00868058 0.00712829 0.00237868] 0.004799814473206321\n[0.00455655 0.00868058 0.00712829 0.00237868 0.00479981] 0.0006371506829210283\n[0.00868058 0.00712829 0.00237868 0.00479981 0.00063715] 0.0021740192459878127\n[0.00712829 0.00237868 0.00479981 0.00063715 0.00217402] 0.003958007825061227\n[0.00237868 0.00479981 0.00063715 0.00217402 0.00395801] 0.0020311318507350573\n[0.00479981 0.00063715 0.00217402 0.00395801 0.00203113] 0.008116815827752866\n[0.00063715 0.00217402 0.00395801 0.00203113 0.00811682] 0.00827127136097508\n[0.00217402 0.00395801 0.00203113 0.00811682 0.00827127] 0.011113313947042097\n[0.00395801 0.00203113 0.00811682 0.00827127 0.01111331] 0.01423725150517749\n[0.00203113 0.00811682 0.00827127 0.01111331 0.01423725] 0.010398947512931689\n[0.00811682 0.00827127 0.01111331 0.01423725 0.01039895] 0.010410538906556105\n[0.00827127 0.01111331 0.01423725 0.01039895 0.01041054] 0.008746230129454635\n[0.01111331 0.01423725 0.01039895 0.01041054 0.00874623] 0.004687822215578175\n[0.01423725 0.01039895 0.01041054 0.00874623 0.00468782] 0.004413662849540617\n[0.01039895 0.01041054 0.00874623 0.00468782 0.00441366] 0.0067459941915327115\n[0.01041054 0.00874623 0.00468782 0.00441366 0.00674599] 0.006128171283455386\n[0.00874623 0.00468782 0.00441366 0.00674599 0.00612817] 0.006827089761159227\n[0.00468782 0.00441366 0.00674599 0.00612817 0.00682709] 0.008383262843414702\n[0.00441366 0.00674599 0.00612817 0.00682709 0.00838326] 0.01313285362896427\n[0.00674599 0.00612817 0.00682709 0.00838326 0.01313285] 0.019990840078125127\n[0.00612817 0.00682709 0.00838326 0.01313285 0.01999084] 0.021801868782417966\n[0.00682709 0.00838326 0.01313285 0.01999084 0.02180187] 0.018735866487126662\n[0.00838326 0.01313285 0.01999084 0.02180187 0.01873587] 0.01889803437072471\n[0.01313285 0.01999084 0.02180187 0.01873587 0.01889803] 0.020971654303085785\n[0.01999084 0.02180187 0.01873587 0.01889803 0.02097165] 0.029312429840062965\n[0.02180187 0.01873587 0.01889803 0.02097165 0.02931243] 0.028285271745882523\n[0.01873587 0.01889803 0.02097165 0.02931243 0.02828527] 0.028543983930702066\n[0.01889803 0.02097165 0.02931243 0.02828527 0.02854398] 0.02843201570391743\n[0.02097165 0.02931243 0.02828527 0.02854398 0.02843202] 0.026914450884850866\n[0.02931243 0.02828527 0.02854398 
0.02843202 0.02691445] 0.031772177365246346\n[0.02828527 0.02854398 0.02843202 0.02691445 0.03177218] 0.03234753622254108\n[0.02854398 0.02843202 0.02691445 0.03177218 0.03234754] 0.03146712208639557\n[0.02843202 0.02691445 0.03177218 0.03234754 0.03146712] 0.03211969669487971\n[0.02691445 0.03177218 0.03234754 0.03146712 0.0321197 ] 0.03216604056409947\n[0.03177218 0.03234754 0.03146712 0.0321197 0.03216604] 0.03203088877487745\n[0.03234754 0.03146712 0.0321197 0.03216604 0.03203089] 0.03898539549441635\n[0.03146712 0.0321197 0.03216604 0.03203089 0.0389854 ] 0.04232558041140025\n[0.0321197 0.03216604 0.03203089 0.0389854 0.04232558] 0.041893085693703286\n[0.03216604 0.03203089 0.0389854 0.04232558 0.04189309] 0.0439512568944693\n[0.03203089 0.0389854 0.04232558 0.04189309 0.04395126] 0.04560009445950927\n[0.0389854 0.04232558 0.04189309 0.04395126 0.04560009] 0.04498227155143189\n[0.04232558 0.04189309 0.04395126 0.04560009 0.04498227] 0.04265379599703342\n[0.04189309 0.04395126 0.04560009 0.04498227 0.0426538 ] 0.04561554241591584\n[0.04395126 0.04560009 0.04498227 0.0426538 0.04561554] 0.045468798457880905\n[0.04560009 0.04498227 0.0426538 0.04561554 0.0454688 ] 0.04617545331680398\n[0.04498227 0.0426538 0.04561554 0.0454688 0.04617545] 0.042541804514593745\n[0.0426538 0.04561554 0.0454688 0.04617545 0.0425418 ] 0.045642581765946794\n[0.04561554 0.0454688 0.04617545 0.0425418 0.04564258] 0.049797509950201296\n[0.0454688 0.04617545 0.0425418 0.04564258 0.04979751] 0.04735321038142451\n[0.04617545 0.0425418 0.04564258 0.04979751 0.04735321] 0.04866995254239423\n[0.0425418 0.04564258 0.04979751 0.04735321 0.04866995] 0.04710606710976292\n[0.04564258 0.04979751 0.04735321 0.04866995 0.04710607] 0.046982507489353814\n[0.04979751 0.04735321 0.04866995 0.04710607 0.04698251] 0.05419572561852384\n[0.04735321 0.04866995 0.04710607 0.04698251 0.05419573] 0.056802216326553606\n[0.04866995 0.04710607 0.04698251 0.05419573 0.05680222] 0.054724740606598904\n[0.04710607 0.04698251 0.05419573 0.05680222 0.05472474] 0.05141159503964596\n[0.04698251 0.05419573 0.05680222 0.05472474 0.0514116 ] 0.050905763614008254\n[0.05419573 0.05680222 0.05472474 0.0514116 0.05090576] 0.056759728244927615\n[0.05680222 0.05472474 0.0514116 0.05090576 0.05675973] 0.05816532561239798\n[0.05472474 0.0514116 0.05090576 0.05675973 0.05816533] 0.0582811496267763\n[0.0514116 0.05090576 0.05675973 0.05816533 0.05828115] 0.06148618275453824\n[0.05090576 0.05675973 0.05816533 0.05828115 0.06148618] 0.0650464483374977\n[0.05675973 0.05816533 0.05828115 0.06148618 0.06504645] 0.06528585522794009\n[0.05816533 0.05828115 0.06148618 0.06504645 0.06528586] 0.06755253895671234\n[0.05828115 0.06148618 0.06504645 0.06528586 0.06755254] 0.0734953355384774\n[0.06148618 0.06504645 0.06528586 0.06755254 0.07349534] 0.07341038340606879\n[0.06504645 0.06528586 0.06755254 0.07349534 0.07341038] 0.07507083639557657\n[0.06528586 0.06755254 0.07349534 0.07341038 0.07507084] 0.07624471364175997\n[0.06755254 0.07349534 0.07341038 0.07507084 0.07624471] 0.07547241194480545\n[0.07349534 0.07341038 0.07507084 0.07624471 0.07547241] 0.0782835578428708\n[0.07341038 0.07507084 0.07624471 0.07547241 0.07828356] 0.07643392165057747\n[0.07507084 0.07624471 0.07547241 0.07828356 0.07643392] 0.07856159702734558\n[0.07624471 0.07547241 0.07828356 0.07643392 0.0785616 ] 0.08070469710486525\n[0.07547241 0.07828356 0.07643392 0.0785616 0.0807047 ] 0.08055024157164306\n[0.07828356 0.07643392 0.0785616 0.0807047 0.08055024] 0.07676213723621059\n[0.07643392 0.0785616 
0.0807047 0.08055024 0.07676214] 0.07145263554977291\n[0.0785616 0.0807047 0.08055024 0.07676214 0.07145264] 0.07163410717737109\n[0.0807047 0.08055024 0.07676214 0.07145264 0.07163411] 0.07430624627380936\n[0.08055024 0.07676214 0.07145264 0.07163411 0.07430625] 0.07170749117181033\n[0.07676214 0.07145264 0.07163411 0.07430625 0.07170749] 0.07025171740318156\n[0.07145264 0.07163411 0.07430625 0.07170749 0.07025172] 0.07168818665262158\n[0.07163411 0.07430625 0.07170749 0.07025172 0.07168819] 0.07585470700568905\n[0.07430625 0.07170749 0.07025172 0.07168819 0.07585471] 0.052496688396670516\n[0.07170749 0.07025172 0.07168819 0.07585471 0.05249669] 0.047395674432207074\n[0.07025172 0.07168819 0.07585471 0.05249669 0.04739567] 0.046190901273210544\n[0.07168819 0.07585471 0.05249669 0.04739567 0.0461909 ] 0.04683962009410106\n[0.07585471 0.05249669 0.04739567 0.0461909 0.04683962] 0.045661885509947014\n[0.05249669 0.04739567 0.0461909 0.04683962 0.04566189] 0.0458395013499516\n[0.04739567 0.0461909 0.04683962 0.04566189 0.0458395 ] 0.0448316718058033\n[0.0461909 0.04683962 0.04566189 0.0458395 0.04483167] 0.04682031557491237\n[0.04683962 0.04566189 0.0458395 0.04483167 0.04682032] 0.04963531803575985\n[0.04566189 0.0458395 0.04483167 0.04682032 0.04963532] 0.049762758249794564\n[0.0458395 0.04483167 0.04682032 0.04963532 0.04976276] 0.04784747366909278\n[0.04483167 0.04682032 0.04963532 0.04976276 0.04784747] 0.04736865833783108\n[0.04682032 0.04963532 0.04976276 0.04784747 0.04736866] 0.041730893779261785\n[0.04963532 0.04976276 0.04784747 0.04736866 0.04173089] 0.03600434533153371\n[0.04976276 0.04784747 0.04736866 0.04173089 0.03600435] 0.040151561165412436\n[0.04784747 0.04736866 0.04173089 0.03600435 0.04015156] 0.04125981482921939\n[0.04736866 0.04173089 0.03600435 0.04015156 0.04125981] 0.03861469182715718\n[0.04173089 0.03600435 0.04015156 0.04125981 0.03861469] 0.036104744645160475\n[0.03600435 0.04015156 0.04125981 0.03861469 0.03610474] 0.03406202062561245\n[0.04015156 0.04125981 0.03861469 0.03610474 0.03406202] 0.034031124712799316\n[0.04125981 0.03861469 0.03610474 0.03406202 0.03403112] 0.04215179710333436\n[0.03861469 0.03610474 0.03406202 0.03403112 0.0421518 ] 0.04283141183703801\n[0.03610474 0.03406202 0.03403112 0.0421518 0.04283141] 0.041248222660406425\n[0.03406202 0.03403112 0.0421518 0.04283141 0.04124822] 0.042059132620549916\n[0.03403112 0.0421518 0.04283141 0.04124822 0.04205913] 0.03942560256248914\n[0.0421518 0.04283141 0.04124822 0.04205913 0.0394256 ] 0.043117163371888506\n[0.04283141 0.04124822 0.04205913 0.0394256 0.04311716] 0.04812163303513037\n[0.04124822 0.04205913 0.0394256 0.04311716 0.04812163] 0.05129577102526764\n[0.04205913 0.0394256 0.04311716 0.04812163 0.05129577] 0.053797981826045144\n[0.0394256 0.04311716 0.04812163 0.05129577 0.05379798] 0.052593208667048724\n[0.04311716 0.04812163 0.05129577 0.05379798 0.05259321] 0.050963675621197385\n[0.04812163 0.05129577 0.05379798 0.05259321 0.05096368] 0.0497241499866056\n[0.05129577 0.05379798 0.05259321 0.05096368 0.04972415] 0.05100228388438635\n[0.05379798 0.05259321 0.05096368 0.04972415 0.05100228] 0.04833014478794814\n[0.05259321 0.05096368 0.04972415 0.05100228 0.04833014] 0.04879739198124047\n[0.05096368 0.04972415 0.05100228 0.04833014 0.04879739] 0.05322262296875038\n[0.04972415 0.05100228 0.04833014 0.04879739 0.05322262] 0.05348521497200706\n[0.05100228 0.04833014 0.04879739 0.05322262 0.05348521] 0.05547000295352242\n[0.04833014 0.04879739 0.05322262 0.05348521 0.05547 ] 
0.055184251418671865\n[0.04879739 0.05322262 0.05348521 0.05547 0.05518425] 0.06245154902309233\n[0.05322262 0.05348521 0.05547 0.05518425 0.06245155] 0.06256351724987705\n[0.05348521 0.05547 0.05518425 0.06245155 0.06256352] 0.06219283761346131\n[0.05547 0.05518425 0.06245155 0.06256352 0.06219284] 0.06306165958079393\n[0.05518425 0.06245155 0.06256352 0.06219284 0.06306166] 0.06046288122313989\n[0.06245155 0.06256352 0.06219284 0.06306166 0.06046288] 0.05809579740555251\n[0.06256352 0.06219284 0.06306166 0.06046288 0.0580958 ] 0.05785253395232795\n[0.06219284 0.06306166 0.06046288 0.0580958 0.05785253] 0.056856271770960615\n[0.06306166 0.06046288 0.0580958 0.05785253 0.05685627] 0.05443127672137252\n[0.06046288 0.0580958 0.05785253 0.05685627 0.05443128] 0.05727717509503319\n[0.0580958 0.05785253 0.05685627 0.05443128 0.05727718] 0.06340532312283362\n[0.05785253 0.05685627 0.05443128 0.05727718 0.06340532] 0.06356751503727512\n[0.05685627 0.05443128 0.05727718 0.06340532 0.06356752] 0.0690894082949676\n[0.05443128 0.05727718 0.06340532 0.06356752 0.06908941] 0.06784602609759369\n[0.05727718 0.06340532 0.06356752 0.06908941 0.06784603] 0.06728609194105048\n[0.06340532 0.06356752 0.06908941 0.06784603 0.06728609] 0.06914732030215678\n[0.06356752 0.06908941 0.06784603 0.06728609 0.06914732] 0.07044478119959272\n[0.06908941 0.06784603 0.06728609 0.06914732 0.07044478] 0.0698694223422979\n[0.06784603 0.06728609 0.06914732 0.07044478 0.06986942] 0.06340532312283362\n[0.06728609 0.06914732 0.07044478 0.06986942 0.06340532] 0.06405406597456756\n[0.06914732 0.07044478 0.06986942 0.06340532 0.06405407] 0.06029297773351108\n[0.07044478 0.06986942 0.06340532 0.06405407 0.06029298] 0.058798596476881876\n[0.06986942 0.06340532 0.06405407 0.06029298 0.0587986 ] 0.056168945462069714\n[0.06340532 0.06405407 0.06029298 0.0587986 0.05616895] 0.05553565056674234\n[0.06405407 0.06029298 0.0587986 0.05616895 0.05553565] 0.07045251758081197\n[0.06029298 0.0587986 0.05616895 0.05553565 0.07045252] 0.07536044371802078\n[0.0587986 0.05616895 0.05553565 0.07045252 0.07536044] 0.07515190793435958\n[0.05616895 0.05553565 0.07045252 0.07536044 0.07515191] 0.07401663817617662\n[0.05553565 0.07045252 0.07536044 0.07515191 0.07401664] 0.07515578775279672\n[0.07045252 0.07536044 0.07515191 0.07401664 0.07515579] 0.07521369975998585\n[0.07536044 0.07515191 0.07401664 0.07515579 0.0752137 ] 0.0759319227568784\n[0.07515191 0.07401664 0.07515579 0.0752137 0.07593192] 0.0836201236853715\n[0.07401664 0.07515579 0.0752137 0.07593192 0.08362012] 0.07721779226068995\n[0.07515579 0.0752137 0.07593192 0.08362012 0.07721779] 0.07981271483075028\n[0.0752137 0.07593192 0.08362012 0.07721779 0.07981271] 0.081523343446228\n[0.07593192 0.08362012 0.07721779 0.07981271 0.08152334] 0.08298297300245042\n[0.08362012 0.07721779 0.07981271 0.08152334 0.08298297] 0.08739278006439735\n[0.07721779 0.07981271 0.08152334 0.08298297 0.08739278] 0.08625363048777732\n[0.07981271 0.08152334 0.08298297 0.08739278 0.08625363] 0.08558946371048023\n[0.08152334 0.08298297 0.08739278 0.08625363 0.08558946] 0.08642353397740607\n[0.08298297 0.08739278 0.08625363 0.08558946 0.08642353] 0.08833884181376281\n[0.08739278 0.08625363 0.08558946 0.08642353 0.08833884] 0.09029659044524715\n[0.08625363 0.08558946 0.08642353 0.08833884 0.09029659] 0.0956872117321548\n[0.08558946 0.08642353 0.08833884 0.09029659 0.09568721] 0.090130542743212\n[0.08642353 0.08833884 0.09029659 0.09568721 0.09013054] 0.09131601293339683\n[0.08833884 0.09029659 0.09568721 0.09013054 
0.09131601] 0.09293012127849645\n[0.09029659 0.09568721 0.09013054 0.09131601 0.09293012] 0.08947796813472789\n[0.09568721 0.09013054 0.09131601 0.09293012 0.08947797] 0.08923082486306624\n[0.09013054 0.09131601 0.09293012 0.08947797 0.08923082] 0.09295713737287245\n[0.09131601 0.09293012 0.08947797 0.08923082 0.09295714] 0.09350548013579121\n[0.09293012 0.08947797 0.08923082 0.09295714 0.09350548] 0.09543235611011738\n[0.08947797 0.08923082 0.09295714 0.09350548 0.09543236] 0.1013442567790693\n[0.08923082 0.09295714 0.09350548 0.09543236 0.10134426] 0.10794736357534937\n[0.09295714 0.09350548 0.09543236 0.10134426 0.10794736] 0.10515937720887777\n[0.09350548 0.09543236 0.10134426 0.10794736 0.10515938] 0.1056304801897637\n[0.09543236 0.10134426 0.10794736 0.10515938 0.10563048] 0.10521345668412826\n[0.10134426 0.10794736 0.10515938 0.10563048 0.10521346] 0.10648773479431536\n[0.10794736 0.10515938 0.10563048 0.10521346 0.10648773] 0.10370360421543748\n[0.10515938 0.10563048 0.10521346 0.10648773 0.1037036 ] 0.10281547772891617\n[0.10563048 0.10521346 0.10648773 0.1037036 0.10281548] 0.10135970473547587\n[0.10521346 0.10648773 0.1037036 0.10281548 0.1013597 ] 0.09856400601862861\n[0.10648773 0.1037036 0.10281548 0.1013597 0.09856401] 0.09605017901819471\n[0.1037036 0.10281548 0.1013597 0.09856401 0.09605018] 0.09741328830403909\n[0.10281548 0.1013597 0.09856401 0.09605018 0.09741329] 0.09872231811463292\n[0.1013597 0.09856401 0.09605018 0.09741329 0.09872232] 0.09739010474160173\n[0.09856401 0.09605018 0.09741329 0.09872232 0.0973901 ] 0.0970232808927797\n[0.09605018 0.09741329 0.09872232 0.0973901 0.09702328] 0.09676454622749361\n[0.09741329 0.09872232 0.0973901 0.09702328 0.09676455] 0.09783804741570526\n[0.09872232 0.0973901 0.09702328 0.09676455 0.09783805] 0.094069246824273\n[0.0973901 0.09702328 0.09676455 0.09783805 0.09406925] 0.09079858933894616\n[0.09702328 0.09676455 0.09783805 0.09406925 0.09079859] 0.09350162434819756\n[0.09676455 0.09783805 0.09406925 0.09079859 0.09350162] 0.09807745508133611\n[0.09783805 0.09406925 0.09079859 0.09350162 0.09807746] 0.0954362126728995\n[0.09406925 0.09079859 0.09350162 0.09807746 0.09543621] 0.09113839709339225\n[0.09079859 0.09350162 0.09807746 0.09543621 0.0911384 ] 0.08649691797184525\n[0.09350162 0.09807746 0.09543621 0.0911384 0.08649692] 0.08333051558773877\n[0.09807746 0.09543621 0.0911384 0.08649692 0.08333052] 0.08441170587067123\n[0.09543621 0.0911384 0.08649692 0.08333052 0.08441171] 0.08925012860706652\n[0.0911384 0.08649692 0.08333052 0.08441171 0.08925013] 0.08933122417669301\n[0.08649692 0.08333052 0.08441171 0.08925013 0.08933122] 0.08919992895025317\n[0.08333052 0.08441171 0.08925013 0.08933122 0.08919993] 0.08606442247895982\n[0.08441171 0.08925013 0.08933122 0.08919993 0.08606442] 0.09048194189128247\n[0.08925013 0.08933122 0.08919993 0.08606442 0.09048194] 0.08630770996302775\n[0.08933122 0.08919993 0.08606442 0.09048194 0.08630771] 0.0798783624439702\n[0.08919993 0.08606442 0.09048194 0.08630771 0.07987836] 0.09299188907327924\n[0.08606442 0.09048194 0.08630771 0.07987836 0.09299189] 0.09308457681171864\n[0.09048194 0.08630771 0.07987836 0.09299189 0.09308458] 0.09609266709982076\n[0.08630771 0.07987836 0.09299189 0.09308458 0.09609267] 0.09823576717734048\n[0.07987836 0.09299189 0.09308458 0.09609267 0.09823577] 0.0965560344746759\n[0.09299189 0.09308458 0.09609267 0.09823577 0.09655603] 0.09359043226819982\n[0.09308458 0.09609267 0.09823577 0.09655603 0.09359043] 0.1004020508172975\n[0.09609267 0.09823577 0.09655603 
0.09359043 0.10040205] 0.10252972619406561\n[0.09823577 0.09655603 0.09359043 0.10040205 0.10252973] 0.10093108983621604\n[0.09655603 0.09359043 0.10040205 0.10252973 0.10093109] 0.10447590668758053\n[0.09359043 0.10040205 0.10252973 0.10093109 0.10447591] 0.11069674245382036\n[0.10040205 0.10252973 0.10093109 0.10447591 0.11069674] 0.11680944252521427\n[0.10252973 0.10093109 0.10447591 0.11069674 0.11680944] 0.11514127796051901\n[0.10093109 0.10447591 0.11069674 0.11680944 0.11514128] 0.12147410520919852\n[0.10447591 0.11069674 0.11680944 0.11514128 0.12147411] 0.12064386669902291\n[0.11069674 0.11680944 0.11514128 0.12147411 0.12064387] 0.12402266065438419\n[0.11680944 0.11514128 0.12147411 0.12064387 0.12402266] 0.12298007785945217\n[0.11514128 0.12147411 0.12064387 0.12402266 0.12298008] 0.12667549523163377\n[0.12147411 0.12064387 0.12402266 0.12298008 0.1266755 ] 0.13779266496626943\n[0.12064387 0.12402266 0.12298008 0.1266755 0.13779266] 0.13315118584472244\n[0.12402266 0.12298008 0.1266755 0.13779266 0.13315119] 0.13520162221464624\n[0.12298008 0.1266755 0.13779266 0.13315119 0.13520162] 0.134950623155391\n[0.1266755 0.13779266 0.13315119 0.13520162 0.13495062] 0.13434822417287676\n[0.13779266 0.13315119 0.13520162 0.13495062 0.13434822] 0.1275713573241858\n[0.13315119 0.13520162 0.13495062 0.13434822 0.12757136] 0.1250112104853757\n[0.13520162 0.13495062 0.13434822 0.12757136 0.12501121] 0.12135437734553964\n[0.13495062 0.13434822 0.12757136 0.12501121 0.12135438] 0.12442425945926802\n[0.13434822 0.12757136 0.12501121 0.12135438 0.12442426] 0.11941978902083764\n[0.12757136 0.12501121 0.12135438 0.12442426 0.11941979] 0.12036585154539162\n[0.12501121 0.12135438 0.12442426 0.11941979 0.12036585] 0.12054349064105116\n[0.12135438 0.12442426 0.11941979 0.12036585 0.12054349] 0.11915722182361296\n[0.12442426 0.11941979 0.12036585 0.12054349 0.11915722] 0.11586338000066024\n[0.11941979 0.12036585 0.12054349 0.11915722 0.11586338] 0.11609507609110378\n[0.12036585 0.12054349 0.11915722 0.11586338 0.11609508] 0.1180026242905863\n[0.12054349 0.11915722 0.11586338 0.11609508 0.11800262] 0.1238295968579731\n[0.11915722 0.11586338 0.11609508 0.11800262 0.1238296 ] 0.12788026839063024\n[0.11586338 0.11609508 0.11800262 0.1238296 0.12788027] 0.12386049277078617\n[0.11609508 0.11800262 0.1238296 0.12788027 0.12386049] 0.12083695452627755\n[0.11800262 0.1238296 0.12788027 0.12386049 0.12083695] 0.12277539863857315\n[0.1238296 0.12788027 0.12386049 0.12083695 0.1227754 ] 0.12201856815368015\n[0.12788027 0.12386049 0.12083695 0.1227754 0.12201857] 0.1263472796460006\n[0.12386049 0.12083695 0.1227754 0.12201857 0.12634728] 0.13189618899806912\n[0.12083695 0.1227754 0.12201857 0.12634728 0.13189619] 0.13191937256050648\n[0.1227754 0.12201857 0.12634728 0.13189619 0.13191937] 0.12576418518267504\n[0.12201857 0.12634728 0.13189619 0.13191937 0.12576419] 0.12429296345763971\n[0.12634728 0.13189619 0.13191937 0.12576419 0.12429296] 0.1199912920905388\n[0.13189619 0.13191937 0.12576419 0.12429296 0.11999129] 0.11860502327310055\n[0.13191937 0.12576419 0.12429296 0.11999129 0.11860502] 0.12148181755957435\n[0.12576419 0.12429296 0.11999129 0.11860502 0.12148182] 0.12280243876379263\n[0.12429296 0.11999129 0.11860502 0.12148182 0.12280244] 0.1240767160987912\n[0.11999129 0.11860502 0.12148182 0.12280244 0.12407672] 0.12697669472289083\n[0.11860502 0.12148182 0.12280244 0.12407672 0.12697669] 0.1248258582641519\n[0.12148182 0.12280244 0.12407672 0.12697669 0.12482586] 0.12641678382200264\n[0.12280244 0.12407672 
0.12697669 0.12482586 0.12641678] 0.1291429783628479\n[0.12407672 0.12697669 0.12482586 0.12641678 0.12914298] 0.1336222655699534\n[0.12697669 0.12482586 0.12641678 0.12914298 0.13362227] 0.13368020160798602\n[0.12482586 0.12641678 0.12914298 0.13362227 0.1336802 ] 0.13396980893043023\n[0.12641678 0.12914298 0.13362227 0.1336802 0.13396981] 0.13947239766893407\n[0.12914298 0.13362227 0.1336802 0.13396981 0.1394724 ] 0.1405381640263034\n[0.13362227 0.1336802 0.13396981 0.1394724 0.14053816] 0.141190762665631\n[0.1336802 0.13396981 0.1394724 0.14053816 0.14119076] 0.13922913421570957\n[0.13396981 0.1394724 0.14053816 0.14119076 0.13922913] 0.1388198230604499\n[0.1394724 0.14053816 0.14119076 0.13922913 0.13881982] 0.13578083685953465\n[0.14053816 0.14119076 0.13922913 0.13881982 0.13578084] 0.13034389573425073\n[0.14119076 0.13922913 0.13881982 0.13578084 0.1303439 ] 0.1357885492099105\n[0.13922913 0.13881982 0.13578084 0.1303439 0.13578855] 0.13312414649469154\n[0.13881982 0.13578084 0.1303439 0.13578855 0.13312415] 0.13277660313421466\n[0.13578084 0.1303439 0.13578855 0.13312415 0.1327766 ] 0.1269071905468888\n[0.1303439 0.13578855 0.13312415 0.1327766 0.12690719] 0.12599973628552377\n[0.13578855 0.13312415 0.1327766 0.12690719 0.12599974] 0.12481041030774534\n[0.13312415 0.1327766 0.12690719 0.12599974 0.12481041] 0.12815057196907423\n[0.1327766 0.12690719 0.12599974 0.12481041 0.12815057] 0.126926494290889\n[0.12690719 0.12599974 0.12481041 0.12815057 0.12692649] 0.13328633763394446\n[0.12599974 0.12481041 0.12815057 0.12692649 0.13328634] 0.13419376863965451\n[0.12481041 0.12815057 0.12692649 0.13328634 0.13419377] 0.13358751386954662\n[0.12815057 0.12692649 0.13328634 0.13419377 0.13358751] 0.13033230356543782\n[0.12692649 0.13328634 0.13419377 0.13358751 0.1303323 ] 0.1280463160926654\n[0.13328634 0.13419377 0.13358751 0.1303323 0.12804632] 0.1288224751275905\n[0.13419377 0.13358751 0.1303323 0.12804632 0.12882248] 0.127953628354226\n[0.13358751 0.1303323 0.12804632 0.12882248 0.12795363] 0.12606152811115004\n[0.1303323 0.12804632 0.12882248 0.12795363 0.12606153] 0.12441652385323729\n[0.12804632 0.12882248 0.12795363 0.12606153 0.12441652] 0.11999900444091452\n[0.12882248 0.12795363 0.12606153 0.12441652 0.119999 ] 0.11607962813469727\n[0.12795363 0.12606153 0.12441652 0.119999 0.11607963] 0.11501771834011001\n[0.12606153 0.12441652 0.119999 0.11607963 0.11501772] 0.11839263170184569\n[0.12441652 0.119999 0.11607963 0.11501772 0.11839263] 0.11830382378184343\n[0.119999 0.11607963 0.11501772 0.11839263 0.11830382] 0.1198136529948792\n[0.11607963 0.11501772 0.11839263 0.11830382 0.11981365] 0.12150497709116817\n[0.11501772 0.11839263 0.11830382 0.11981365 0.12150498] 0.12015348400498024\n[0.11839263 0.11830382 0.11981365 0.12150498 0.12015348] 0.1186783832366963\n[0.11830382 0.11981365 0.12150498 0.12015348 0.11867838] 0.11240737106929802\n[0.11981365 0.12150498 0.12015348 0.11867838 0.11240737] 0.11178952413037727\n[0.12150498 0.12015348 0.11867838 0.11240737 0.11178952] 0.11444621527040896\n[0.12015348 0.11867838 0.11240737 0.11178952 0.11444622] 0.11115239747829972\n[0.11867838 0.11240737 0.11178952 0.11444622 0.1111524 ] 0.11635764328832851\n[0.11240737 0.11178952 0.11444622 0.1111524 0.11635764] 0.12070180273705552\n[0.11178952 0.11444622 0.1111524 0.11635764 0.1207018 ] 0.12376394846956465\n[0.11444622 0.1111524 0.11635764 0.1207018 0.12376395] 0.12377166081994043\n[0.1111524 0.11635764 0.1207018 0.12376395 0.12377166] 0.12704234233611075\n[0.11635764 0.1207018 0.12376395 
0.12377166 0.12704234] 0.1272817500017416\n[0.1207018 0.12376395 0.12377166 0.12704234 0.12728175] 0.1301855603825914\n[0.12376395 0.12377166 0.12704234 0.12728175 0.13018556] 0.12897693066081276\n[0.12377166 0.12704234 0.12728175 0.13018556 0.12897693] 0.12743620553496385\n[0.12704234 0.12728175 0.13018556 0.12897693 0.12743621] 0.12693035085367113\n[0.12728175 0.13018556 0.12897693 0.12743621 0.12693035] 0.1262932001707501\n[0.13018556 0.12897693 0.12743621 0.12693035 0.1262932 ] 0.13293879504865616\n[0.12897693 0.12743621 0.12693035 0.1262932 0.1329388 ] 0.1310351026367673\n[0.12743621 0.12693035 0.1262932 0.1329388 0.1310351 ] 0.13285769870384115\n[0.12693035 0.1262932 0.1329388 0.1310351 0.1328577 ] 0.1264437999163787\n[0.1262932 0.1329388 0.1310351 0.1328577 0.1264438] 0.12657509514281853\n[0.1329388 0.1310351 0.1328577 0.1264438 0.1265751] 0.1228642305894189\n[0.1310351 0.1328577 0.1264438 0.1265751 0.12286423] 0.1232271978754588\n[0.1328577 0.1264438 0.1265751 0.12286423 0.1232272 ] 0.12253987079137943\n[0.1264438 0.1265751 0.12286423 0.1232272 0.12253987] 0.12235451934534405\n[0.1265751 0.12286423 0.1232272 0.12253987 0.12235452] 0.12663688696844475\n[0.12286423 0.1232272 0.12253987 0.12235452 0.12663689] 0.1270191587736734\n[0.1232272 0.12253987 0.12235452 0.12663689 0.12701916] 0.12242402352134615\n[0.12253987 0.12235452 0.12663689 0.12701916 0.12242402] 0.12097981866587529\n[0.12235452 0.12663689 0.12701916 0.12242402 0.12097982] 0.11842740743309596\n[0.12663689 0.12701916 0.12242402 0.12097982 0.11842741] 0.11376660053670526\n[0.12701916 0.12242402 0.12097982 0.11842741 0.1137666 ] 0.114612262972444\n[0.12242402 0.12097982 0.11842741 0.1137666 0.11461226] 0.11939277292646164\n[0.12097982 0.11842741 0.1137666 0.11461226 0.11939277] 0.12084081031387114\n[0.11842741 0.1137666 0.11461226 0.11939277 0.12084081] 0.12243173509653346\n[0.1137666 0.11461226 0.11939277 0.12084081 0.12243174] 0.12471001099411858\n[0.11461226 0.11939277 0.12084081 0.12243174 0.12471001] 0.1308960710291082\n[0.11939277 0.12084081 0.12243174 0.12471001 0.13089607] 0.12732806984011782\n[0.12084081 0.12243174 0.12471001 0.13089607 0.12732807] 0.17467740117829375\n[0.12243174 0.12471001 0.13089607 0.12732807 0.1746774 ] 0.17154575126978253\n[0.12471001 0.13089607 0.12732807 0.1746774 0.17154575] 0.17297450816884682\n[0.13089607 0.12732807 0.1746774 0.17154575 0.17297451] 0.18240035380719374\n[0.12732807 0.1746774 0.17154575 0.17297451 0.18240035] 0.18013752586601497\n[0.1746774 0.17154575 0.17297451 0.18240035 0.18013753] 0.17614090977776484\n[0.17154575 0.17297451 0.18240035 0.18013753 0.17614091] 0.17606366999573195\n[0.17297451 0.18240035 0.18013753 0.17614091 0.17606367] 0.18426541470023847\n[0.18240035 0.18013753 0.17614091 0.17606367 0.18426541] 0.18201805874630997\n[0.18013753 0.17614091 0.17606367 0.18426541 0.18201806] 0.18207985057193624\n[0.17614091 0.17606367 0.18426541 0.18201806 0.18207985] 0.18071288472330974\n[0.17606367 0.18426541 0.18201806 0.18207985 0.18071288] 0.180353774000052\n[0.18426541 0.18201806 0.18207985 0.18071288 0.18035377] 0.17858135355894797\n[0.18201806 0.18207985 0.18071288 0.18035377 0.17858135] 0.17905631232742755\n[0.18207985 0.18071288 0.18035377 0.17858135 0.17905631] 0.17334133201766885\n[0.18071288 0.18035377 0.17858135 0.17905631 0.17334133] 0.17646141301302218\n[0.18035377 0.17858135 0.17905631 0.17334133 0.17646141] 0.17436077776147355\n[0.17858135 0.17905631 0.17334133 0.17646141 0.17436078] 0.1748202885735465\n[0.17905631 0.17334133 0.17646141 0.17436078 
0.17482029] 0.18280966496245318\n[0.17334133 0.17646141 0.17436078 0.17482029 0.18280966] 0.18387540728897908\n[0.17646141 0.17436078 0.17482029 0.18280966 0.18387541] 0.18323056828652576\n[0.17436078 0.17482029 0.18280966 0.18387541 0.18323057] 0.18245438599594557\n[0.17482029 0.18280966 0.18387541 0.18323057 0.18245439] 0.18000237485198148\n[0.18280966 0.18387541 0.18323057 0.18245439 0.18000237] 0.17888640883779874\n[0.18387541 0.18323057 0.18245439 0.18000237 0.17888641] 0.18342748787053062\n[0.18323057 0.18245439 0.18000237 0.17888641 0.18342749] 0.182585681997574\n[0.18245439 0.18000237 0.17888641 0.18342749 0.18258568] 0.18800722245294973\n[0.18000237 0.17888641 0.18342749 0.18258568 0.18800722] 0.19282631741450132\n[0.17888641 0.18342749 0.18258568 0.18800722 0.19282632] 0.19464122516204282\n[0.18342749 0.18258568 0.18800722 0.19282632 0.19464123] 0.19328194840813714\n[0.18258568 0.18800722 0.19282632 0.19464123 0.19328195] 0.19130877585108966\n[0.18800722 0.19282632 0.19464123 0.19328195 0.19130878] 0.19083767287020378\n[0.19282632 0.19464123 0.19328195 0.19130878 0.19083767] 0.1927375094944991\n[0.19464123 0.19328195 0.19130878 0.19083767 0.19273751] 0.19241312644080452\n[0.19328195 0.19130878 0.19083767 0.19273751 0.19241313] 0.1972515724328546\n[0.19130878 0.19083767 0.19273751 0.19241313 0.19725157] 0.20044501416699226\n[0.19083767 0.19273751 0.19241313 0.19725157 0.20044501] 0.2029626736993648\n[0.19273751 0.19241313 0.19725157 0.20044501 0.20296267] 0.2001167512948607\n[0.19241313 0.19725157 0.20044501 0.20296267 0.20011675] 0.19728630087760643\n[0.19725157 0.20044501 0.20296267 0.20011675 0.1972863 ] 0.1937453398138357\n[0.20044501 0.20296267 0.20011675 0.1972863 0.19374534] 0.19845248980425764\n[0.20296267 0.20011675 0.1972863 0.19374534 0.19845249] 0.19724769261441763\n[0.20011675 0.1972863 0.19374534 0.19845249 0.19724769] 0.2029974013689281\n[0.1972863 0.19374534 0.19845249 0.19724769 0.2029974 ] 0.20356507190669051\n[0.19374534 0.19845249 0.19724769 0.2029974 0.20356507] 0.20912557342757196\n[0.19845249 0.19724769 0.2029974 0.20356507 0.20912557] 0.21471697086126676\n[0.19724769 0.2029974 0.20356507 0.20912557 0.21471697] 0.21345814070748612\n[0.2029974 0.20356507 0.20912557 0.21471697 0.21345814] 0.21562828091022526\n[0.20356507 0.20912557 0.21471697 0.21345814 0.21562828] 0.21599127145192012\n[0.20912557 0.21471697 0.21345814 0.21562828 0.21599127] 0.21253911830815161\n[0.21471697 0.21345814 0.21562828 0.21599127 0.21253912] 0.21688327775687863\n[0.21345814 0.21562828 0.21599127 0.21253912 0.21688328] 0.2139524272508093\n[0.21562828 0.21599127 0.21253912 0.21688328 0.21395243] 0.21081689829904945\n[0.21599127 0.21253912 0.21688328 0.21395243 0.2108169 ] 0.2155742487214732\n[0.21253912 0.21688328 0.21395243 0.2108169 0.21557425] 0.22389181666516958\n[0.21688328 0.21395243 0.2108169 0.21557425 0.22389182] 0.22480700653256538\n[0.21395243 0.2108169 0.21557425 0.22389182 0.22480701] 0.22056324717265344\n[0.2108169 0.21557425 0.22389182 0.22480701 0.22056325] 0.22054006283502772\n[0.21557425 0.22389182 0.22480701 0.22056325 0.22054006] 0.2177598128497755\n[0.22389182 0.22480701 0.22056325 0.22054006 0.21775981] 0.2279618400035139\n[0.22480701 0.22056325 0.22054006 0.21775981 0.22796184] 0.22766064051225673\n[0.22056325 0.22054006 0.21775981 0.22796184 0.22766064] 0.2305953235502646\n[0.22054006 0.21775981 0.22796184 0.22766064 0.23059532] 0.22839816725314951\n[0.21775981 0.22796184 0.22766064 0.23059532 0.22839817] 0.2334837332612064\n[0.22796184 0.22766064 0.23059532 
0.22839817 0.23348373] 0.23399344450528128\n[0.22766064 0.23059532 0.22839817 0.23348373 0.23399344] 0.23209360788098607\n[0.23059532 0.22839817 0.23348373 0.23399344 0.23209361] 0.21808802766022015\n[0.22839817 0.23348373 0.23399344 0.23209361 0.21808803] 0.2093611012747658\n[0.23348373 0.23399344 0.23209361 0.21808803 0.2093611 ] 0.21777140424340005\n[0.23399344 0.23209361 0.21808803 0.2093611 0.2177714 ] 0.21155830485837937\n[0.23209361 0.21808803 0.2093611 0.2177714 0.2115583 ] 0.22255189094176273\n[0.21808803 0.2093611 0.2177714 0.2115583 0.22255189] 0.2401524644150512\n[0.2093611 0.2177714 0.2115583 0.22255189 0.24015246] 0.22179506045686956\n[0.2177714 0.2115583 0.22255189 0.24015246 0.22179506] 0.2236215130867256\n[0.2115583 0.22255189 0.24015246 0.22179506 0.22362151] 0.22556771683589547\n[0.22255189 0.24015246 0.22179506 0.22362151 0.22556772] 0.23203952840573558\n[0.24015246 0.22179506 0.22362151 0.22556772 0.23203953] 0.23878935512920715\n[0.22179506 0.22362151 0.22556772 0.23203953 0.23878936] 0.23704783060091614\n[0.22362151 0.22556772 0.23203953 0.23878936 0.23704783] 0.24370889746607224\n[0.22556772 0.23203953 0.23878936 0.23704783 0.2437089 ] 0.242361212105791\n[0.23203953 0.23878936 0.23704783 0.2437089 0.24236121] 0.24746225010109787\n[0.23878936 0.23704783 0.2437089 0.24236121 0.24746225] 0.24858207190287443\n[0.23704783 0.2437089 0.24236121 0.24746225 0.24858207] 0.25170212964257255\n[0.2437089 0.24236121 0.24746225 0.24858207 0.25170213] 0.24840445606286973\n[0.24236121 0.24746225 0.24858207 0.25170213 0.24840446] 0.24908790332851186\n[0.24746225 0.24858207 0.25170213 0.24840446 0.2490879 ] 0.24896436696375787\n[0.24858207 0.25170213 0.24840446 0.2490879 0.24896437] 0.25233154471946284\n[0.25170213 0.24840446 0.2490879 0.24896437 0.25233154] 0.25522378696234327\n[0.24840446 0.2490879 0.24896437 0.25233154 0.25522379] 0.255289411319908\n[0.2490879 0.24896437 0.25233154 0.25522379 0.25528941] 0.2549187549391474\n[0.24896437 0.25233154 0.25522379 0.25528941 0.25491875] 0.25354405425967863\n[0.25233154 0.25522379 0.25528941 0.25491875 0.25354405] 0.24853958382124822\n[0.25522379 0.25528941 0.25491875 0.25354405 0.24853958] 0.25325827869398465\n[0.25528941 0.25491875 0.25354405 0.24853958 0.25325828] 0.25455188457901523\n[0.25491875 0.25354405 0.24853958 0.25325828 0.25455188] 0.25507318721671457\n[0.25354405 0.24853958 0.25325828 0.25455188 0.25507319] 0.2532119588556082\n[0.24853958 0.25325828 0.25455188 0.25507319 0.25321196] 0.2519685534025794\n[0.25325828 0.25455188 0.25507319 0.25321196 0.25196855] 0.2474969785458497\n[0.25455188 0.25507319 0.25321196 0.25196855 0.24749698] 0.250319716612728\n[0.25507319 0.25321196 0.25196855 0.24749698 0.25031972] 0.2432764019731868\n[0.25321196 0.25196855 0.24749698 0.25031972 0.2432764 ] 0.23699767823060128\n[0.25196855 0.24749698 0.25031972 0.2432764 0.23699768] 0.24445030402540227\n[0.24749698 0.25031972 0.2432764 0.23699768 0.2444503 ] 0.25184884956976406\n[0.25031972 0.2432764 0.23699768 0.2444503 0.25184885] 0.2472112502666541\n[0.2432764 0.23699768 0.2444503 0.25184885 0.24721125] 0.2464041728384492\n[0.23699768 0.2444503 0.25184885 0.24721125 0.24640417] 0.2409517829815704\n[0.2444503 0.25184885 0.24721125 0.24640417 0.24095178] 0.2312556345399682\n[0.25184885 0.24721125 0.24640417 0.24095178 0.23125563] 0.23156071307447393\n[0.24721125 0.24640417 0.24095178 0.23125563 0.23156071] 0.22123126973754423\n[0.24640417 0.24095178 0.23125563 0.23156071 0.22123127] 0.21585108093745517\n[0.24095178 0.23125563 0.23156071 0.22123127 
0.21585108] 0.21703387750250716\n[0.23125563 0.23156071 0.22123127 0.21585108 0.21703388] 0.2146991802852163\n[0.23156071 0.22123127 0.21585108 0.21703388 0.21469918] 0.2225767310820221\n[0.22123127 0.21585108 0.21703388 0.21469918 0.22257673] 0.22245305285777275\n[0.21585108 0.21703388 0.21469918 0.22257673 0.22245305] 0.22457124225287947\n[0.21703388 0.21469918 0.22257673 0.22245305 0.22457124] 0.20400769213656042\n[0.21469918 0.22257673 0.22245305 0.22457124 0.20400769] 0.20015010765596938\n[0.22257673 0.22245305 0.22457124 0.20400769 0.20015011] 0.21309893463604293\n[0.22245305 0.22457124 0.20400769 0.20015011 0.21309893] 0.2202420811512297\n[0.22457124 0.20400769 0.20015011 0.21309893 0.22024208] 0.2023146641759507\n[0.20400769 0.20015011 0.21309893 0.22024208 0.20231466] 0.19431343593008407\n[0.20015011 0.21309893 0.22024208 0.20231466 0.19431344] 0.19579771570538382\n[0.21309893 0.22024208 0.20231466 0.19431344 0.19579772] 0.19882816087941782\n[0.22024208 0.20231466 0.19431344 0.19579772 0.19882816] 0.21436675387565698\n[0.20231466 0.19431344 0.19579772 0.19882816 0.21436675] 0.1985652859323592\n[0.19431344 0.19579772 0.19882816 0.21436675 0.19856529] 0.19278278825472103\n[0.19579772 0.19882816 0.21436675 0.19856529 0.19278279] 0.19756805399017982\n[0.19882816 0.21436675 0.19856529 0.19278279 0.19756805] 0.19148402581579557\n[0.21436675 0.19856529 0.19278279 0.19756805 0.19148403] 0.19010795305268371\n[0.19856529 0.19278279 0.19756805 0.19148403 0.19010795] 0.18316584314761972\n[0.19278279 0.19756805 0.19148403 0.19010795 0.18316584] 0.18391571864130016\n[0.19756805 0.19148403 0.19010795 0.18316584 0.18391572] 0.19207156831072789\n[0.19148403 0.19010795 0.18316584 0.18391572 0.19207157] 0.19126756605529824\n[0.19010795 0.18316584 0.18391572 0.19207157 0.19126757] 0.19489326607464066\n[0.18316584 0.18391572 0.19207157 0.19126757 0.19489327] 0.19224937329672623\n[0.18391572 0.19207157 0.19126757 0.19489327 0.19224937] 0.19215659098528995\n[0.19207157 0.19126757 0.19489327 0.19224937 0.19215659] 0.18236184011700154\n[0.19126757 0.19489327 0.19224937 0.19215659 0.18236184] 0.17835736981888026\n[0.19489327 0.19224937 0.19215659 0.18236184 0.17835737] 0.17916134881865478\n[0.19224937 0.19215659 0.18236184 0.17835737 0.17916135] 0.18513716998085109\n[0.19215659 0.18236184 0.17835737 0.17916135 0.18513717] 0.19378777983377485\n[0.18236184 0.17835737 0.17916135 0.18513717 0.19378778] 0.19623839563844095\n[0.17835737 0.17916135 0.18513717 0.19378778 0.1962384 ] 0.19125985370492257\n[0.17916135 0.18513717 0.19378778 0.1962384 0.19125985] 0.186103458723719\n[0.18513717 0.19378778 0.1962384 0.19125985 0.18610346] 0.18660597854408925\n[0.19378778 0.1962384 0.19125985 0.18610346 0.18660598] 0.19296830559109543\n[0.1962384 0.19125985 0.18610346 0.18660598 0.19296831] 0.19367181395990127\n[0.19125985 0.18610346 0.18660598 0.19296831 0.19367181] 0.2007608337133391\n[0.18610346 0.18660598 0.19296831 0.19367181 0.20076083] 0.2054919719118601\n[0.18660598 0.19296831 0.19367181 0.20076083 0.20549197] 0.21139819510024613\n[0.19296831 0.19367181 0.20076083 0.20549197 0.2113982 ] 0.22164133825196733\n[0.19367181 0.20076083 0.20549197 0.2113982 0.22164134] 0.21834035229180862\n[0.20076083 0.20549197 0.2113982 0.22164134 0.21834035] 0.2171034289650077\n[0.20549197 0.2113982 0.22164134 0.21834035 0.21710343] 0.21695656717832087\n[0.2113982 0.22164134 0.21834035 0.21710343 0.21695657] 0.21234905914236227\n[0.22164134 0.21834035 0.21710343 0.21695657 0.21234906] 0.20539918960042394\n[0.21834035 0.21710343 
0.21695657 0.21234906 0.20539919] 0.20518272906473808\n[0.21710343 0.21695657 0.21234906 0.20539919 0.20518273] 0.21232587557992497\n[0.21695657 0.21234906 0.20539919 0.20518273 0.21232588] 0.21420442087690858\n[0.21234906 0.20539919 0.20518273 0.21232588 0.21420442] 0.21868049105174367\n[0.20539919 0.20518273 0.21232588 0.21420442 0.21868049] 0.21746675128738016\n[0.20518273 0.21232588 0.21420442 0.21868049 0.21746675] 0.21614480451082865\n[0.21232588 0.21420442 0.21868049 0.21746675 0.2161448 ] 0.2103545479715049\n[0.21420442 0.21868049 0.21746675 0.2161448 0.21035455] 0.21067150316900254\n[0.21868049 0.21746675 0.2161448 0.21035455 0.2106715 ] 0.20488900549136477\n[0.21746675 0.2161448 0.21035455 0.2106715 0.20488901] 0.20390719747474836\n[0.2161448 0.21035455 0.2106715 0.20488901 0.2039072 ] 0.21191613807099094\n[0.21035455 0.2106715 0.20488901 0.2039072 0.21191614] 0.21309893463604293\n[0.2106715 0.20488901 0.2039072 0.21191614 0.21309893] 0.21422760443934605\n[0.20488901 0.2039072 0.21191614 0.21309893 0.2142276 ] 0.22086827842066106\n[0.2039072 0.21191614 0.21309893 0.2142276 0.22086828] 0.2206131631104762\n[0.21191614 0.21309893 0.2142276 0.22086828 0.22061316] 0.23145927268269326\n[0.21309893 0.2142276 0.22086828 0.22061316 0.23145927] 0.22941063475008686\n[0.2142276 0.22086828 0.22061316 0.23145927 0.22941063] 0.23036925842907738\n[0.22086828 0.22061316 0.23145927 0.22941063 0.23036926] 0.22885403545446606\n[0.22061316 0.23145927 0.22941063 0.23036926 0.22885404] 0.23456698166960357\n[0.23145927 0.22941063 0.23036926 0.22885404 0.23456698] 0.2343080338274803\n[0.22941063 0.23036926 0.22885404 0.23456698 0.23430803] 0.23615951496840124\n[0.23036926 0.22885404 0.23456698 0.23430803 0.23615951] 0.23424231489691852\n[0.22885404 0.23456698 0.23430803 0.23615951 0.23424231] 0.2256148886064319\n[0.23456698 0.23430803 0.23615951 0.23424231 0.22561489] 0.22947247386221162\n[0.23430803 0.23615951 0.23424231 0.22561489 0.22947247] 0.2256226009568079\n[0.23615951 0.23424231 0.22561489 0.22947247 0.2256226 ] 0.2318690094164387\n[0.23424231 0.22561489 0.22947247 0.2256226 0.23186901] 0.23626772198058912\n[0.22561489 0.22947247 0.2256226 0.23186901 0.23626772] 0.23619817051808847\n[0.22947247 0.2256226 0.23186901 0.23626772 0.23619817] 0.2345592693192278\n[0.2256226 0.23186901 0.23626772 0.23619817 0.23455927] 0.2276557676773526\n[0.23186901 0.23626772 0.23619817 0.23455927 0.22765577] 0.2441607439894564\n[0.23626772 0.23619817 0.23455927 0.22765577 0.24416074] 0.2398238232509323\n[0.23619817 0.23455927 0.22765577 0.24416074 0.23982382] 0.24389791555370746\n[0.23455927 0.22765577 0.24416074 0.23982382 0.24389792] 0.24485649272138807\n[0.22765577 0.24416074 0.23982382 0.24389792 0.24485649] 0.24282332600084336\n[0.24416074 0.23982382 0.24389792 0.24485649 0.24282333] 0.23947597291580988\n[0.23982382 0.24389792 0.24485649 0.24282333 0.23947597] 0.2406974242553608\n[0.24389792 0.24485649 0.24282333 0.23947597 0.24069742] 0.23683979248827128\n[0.24485649 0.24282333 0.23947597 0.24069742 0.23683979] 0.23823904958900902\n[0.24282333 0.23947597 0.24069742 0.23683979 0.23823905] 0.22600915412811604\n[0.23947597 0.24069742 0.23683979 0.23823905 0.22600915] 0.22173407327690522\n[0.24069742 0.23683979 0.23823905 0.22600915 0.22173407] 0.22720742268041802\n[0.23683979 0.23823905 0.22600915 0.22173407 0.22720742] 0.22096101344559874\n[0.23823905 0.22600915 0.22173407 0.22720742 0.22096101] 0.22196912748392608\n[0.22600915 0.22173407 0.22720742 0.22096101 0.22196913] 0.21963906744423575\n[0.22173407 
0.22720742 0.22096101 0.22196913 0.21963907] 0.22382136675919886\n[0.22720742 0.22096101 0.22196913 0.21963907 0.22382137] 0.22313332960245427\n[0.22096101 0.22196913 0.21963907 0.22382137 0.22313333] 0.21915206689761388\n[0.22196913 0.21963907 0.22382137 0.22313333 0.21915207] 0.22846748228315786\n[0.21963907 0.22382137 0.22313333 0.21915207 0.22846748] 0.2283669876213458\n[0.22382137 0.22313333 0.21915207 0.22846748 0.22836699] 0.22746253799060265\n[0.22313333 0.21915207 0.22846748 0.22836699 0.22746254] 0.2341727161479196\n[0.21915207 0.22846748 0.22836699 0.22746254 0.23417272] 0.23780612851763774\n[0.22846748 0.22836699 0.22746254 0.23417272 0.23780613] 0.23597395034552868\n[0.22836699 0.22746254 0.23417272 0.23780613 0.23597395] 0.23510815626447276\n[0.22746254 0.23417272 0.23780613 0.23597395 0.23510816] 0.2344819589950415\n[0.23417272 0.23780613 0.23597395 0.23510816 0.23448196] 0.23265754045980666\n[0.23780613 0.23597395 0.23510816 0.23448196 0.23265754] 0.23084854662532342\n[0.23597395 0.23510816 0.23448196 0.23265754 0.23084855] 0.22554528985743313\n[0.23510816 0.23448196 0.23265754 0.23084855 0.22554529] 0.22415379316875808\n[0.23448196 0.23265754 0.23084855 0.22554529 0.22415379] 0.22600915412811604\n[0.23265754 0.23084855 0.22554529 0.22415379 0.22600915] 0.23043880989157775\n[0.23084855 0.22554529 0.22415379 0.22600915 0.23043881] 0.23091038573744818\n[0.22554529 0.22415379 0.22600915 0.23043881 0.23091039] 0.2340335667116086\n[0.22415379 0.22600915 0.23043881 0.23091039 0.23403357] 0.23720316209714212\n[0.22600915 0.23043881 0.23091039 0.23403357 0.23720316] 0.24001710022418052\n[0.23043881 0.23091039 0.23403357 0.23720316 0.2400171 ] 0.23328369044273942\n[0.23091039 0.23403357 0.23720316 0.2400171 0.23328369] 0.23489940807916285\n[0.23403357 0.23720316 0.2400171 0.23328369 0.23489941] 0.23354656616498673\n[0.23720316 0.2400171 0.23328369 0.23489941 0.23354657] 0.22911686311502646\n[0.2400171 0.23328369 0.23489941 0.23354657 0.22911686] 0.22716876713073073\n[0.23328369 0.23489941 0.23354657 0.22911686 0.22716877] 0.23246426426174685\n[0.23489941 0.23354657 0.22911686 0.22716877 0.23246426] 0.2361904108812141\n[0.23354657 0.22911686 0.22716877 0.23246426 0.23619041] 0.23966924911386975\n[0.22911686 0.22716877 0.23246426 0.23619041 0.23966925] 0.24493380304557436\n[0.22716877 0.23246426 0.23619041 0.23966925 0.2449338 ] 0.2382003940393215\n[0.23246426 0.23619041 0.23966925 0.2449338 0.23820039] 0.2333764727541757\n[0.23619041 0.23966925 0.2449338 0.23820039 0.23337647] 0.2386796822355677\n[0.23966925 0.2449338 0.23820039 0.23337647 0.23867968] 0.22868394281884372\n[0.2449338 0.23820039 0.23337647 0.23867968 0.22868394] 0.2302610041303911\n[0.23820039 0.23337647 0.23867968 0.22868394 0.230261 ] 0.22968893362270879\n[0.23337647 0.23867968 0.22868394 0.230261 0.22968893] 0.23046199345401527\n[0.23867968 0.22868394 0.230261 0.22968893 0.23046199] 0.22343486009920058\n[0.22868394 0.230261 0.22968893 0.23046199 0.22343486] 0.22483406991343982\n[0.230261 0.22968893 0.23046199 0.22343486 0.22483407] 0.22885403545446606\n[0.22968893 0.23046199 0.22343486 0.22483407 0.22885404] 0.23045428110363944\n[0.23046199 0.22343486 0.22483407 0.22885404 0.23045428] 0.21993283830410768\n[0.22343486 0.22483407 0.22885404 0.23045428 0.21993284] 0.22670490286004777\n[0.22483407 0.22885404 0.23045428 0.21993284 0.2267049 ] 0.21772186659756462\n[0.22885404 0.23045428 0.21993284 0.2267049 0.21772187] 0.20505133926530164\n[0.23045428 0.21993284 0.2267049 0.21772187 0.20505134] 
0.1963311306633789\n[0.21993284 0.2267049 0.21772187 0.20505134 0.19633113] 0.19998772659553413\n[0.2267049 0.21772187 0.20505134 0.19633113 0.19998773] 0.1938728025083369\n[0.21772187 0.20505134 0.19633113 0.19998773 0.1938728 ] 0.18960548129400026\n[0.20505134 0.19633113 0.19998773 0.1938728 0.18960548] 0.17929276264893498\n[0.19633113 0.19998773 0.1938728 0.18960548 0.17929276] 0.1867683115428376\n[0.19998773 0.1938728 0.18960548 0.17929276 0.18676831] 0.19117478374386213\n[0.1938728 0.18960548 0.17929276 0.18676831 0.19117478] 0.19594462477856903\n[0.18960548 0.17929276 0.18676831 0.19117478 0.19594462] 0.2046570737436175\n[0.17929276 0.18676831 0.19117478 0.19594462 0.20465707] 0.20141016725870917\n[0.18676831 0.19117478 0.19594462 0.20465707 0.20141017] 0.20217551473963988\n[0.19117478 0.19594462 0.20465707 0.20141017 0.20217551] 0.20846053146245971\n[0.19594462 0.20465707 0.20141017 0.20217551 0.20846053] 0.20879295787201893\n[0.20465707 0.20141017 0.20217551 0.20846053 0.20879296] 0.2095505930025738\n[0.20141017 0.20217551 0.20846053 0.20879296 0.20955059] 0.2163303691337012\n[0.20217551 0.20846053 0.20879296 0.20955059 0.21633037] 0.2133463383710402\n[0.20846053 0.20879296 0.20955059 0.21633037 0.21334634] 0.21248820857867326\n[0.20879296 0.20955059 0.21633037 0.21334634 0.21248821] 0.20615682473097882\n[0.20955059 0.21633037 0.21334634 0.21248821 0.20615682] 0.20315732198106776\n[0.21633037 0.21334634 0.21248821 0.20615682 0.20315732] 0.20236107936251255\n[0.21334634 0.21248821 0.20615682 0.20315732 0.20236108] 0.2073705172088439\n[0.21248821 0.20615682 0.20315732 0.20236108 0.20737052] 0.20953512101532382\n[0.20615682 0.20315732 0.20236108 0.20737052 0.20953512] 0.20723136699734448\n[0.20315732 0.20236108 0.20737052 0.20953512 0.20723137] 0.20573937564685732\n[0.20236108 0.20737052 0.20953512 0.20723137 0.20573938] 0.20498174051630272\n[0.20737052 0.20953512 0.20723137 0.20573938 0.20498174] 0.19888224112985695\n[0.20953512 0.20723137 0.20573938 0.20498174 0.19888224] 0.1977381474009907\n[0.20723137 0.20573938 0.20498174 0.19888224 0.19773815] 0.1992533223139151\n[0.20573938 0.20498174 0.19888224 0.19773815 0.19925332] 0.19758352597742976\n[0.20498174 0.19888224 0.19773815 0.19925332 0.19758353] 0.19964758783559908\n[0.19888224 0.19773815 0.19925332 0.19758353 0.19964759] 0.20101594902352363\n[0.19773815 0.19925332 0.19758353 0.19964759 0.20101595] 0.20241515883776293\n[0.19925332 0.19758353 0.19964759 0.20101595 0.20241516] 0.2018663191790165\n[0.19758353 0.19964759 0.20101595 0.20241516 0.20186632] 0.20299498898231946\n[0.19964759 0.20101595 0.20241516 0.20186632 0.20299499] 0.19678723529718747\n[0.20101595 0.20241516 0.20186632 0.20299499 0.19678724] 0.19674857974750012\n[0.20241516 0.20186632 0.20299499 0.19678724 0.19674858] 0.19487003445051634\n[0.20186632 0.20299499 0.19678724 0.19674858 0.19487003] 0.1995007260489122\n[0.20299499 0.19678724 0.19674858 0.19487003 0.19950073] 0.1901852633768699\n[0.19678724 0.19674858 0.19487003 0.19950073 0.19018526] 0.1915149217286087\n[0.19674858 0.19487003 0.19950073 0.19018526 0.19151492] 0.19645485617412664\n[0.19487003 0.19950073 0.19018526 0.19151492 0.19645486] 0.19080370178461545\n[0.19950073 0.19018526 0.19151492 0.19645486 0.1908037 ] 0.19256632849422398\n[0.19018526 0.19151492 0.19645486 0.1908037 0.19256633] 0.18508304321910216\n[0.19151492 0.19645486 0.1908037 0.19256633 0.18508304] 0.18132595340032318\n[0.19645486 0.1908037 0.19256633 0.18508304 0.18132595] 0.16709378713169862\n[0.1908037 0.19256633 0.18508304 0.18132595 
0.16709379] 0.17443792219532103\n[0.19256633 0.18508304 0.18132595 0.16709379 0.17443792] 0.1792386599180296\n[0.18508304 0.18132595 0.16709379 0.17443792 0.17923866] 0.1832972329470561\n[0.18132595 0.16709379 0.17443792 0.17923866 0.18329723] 0.1898837801666222\n[0.16709379 0.17443792 0.17923866 0.18329723 0.18988378] 0.19430572357970824\n[0.17443792 0.17923866 0.18329723 0.18988378 0.19430572] 0.1928987549037833\n[0.17923866 0.18329723 0.18988378 0.19430572 0.19289875] 0.19696504028318582\n[0.18329723 0.18988378 0.19430572 0.19289875 0.19696504] 0.19410473503127276\n[0.18988378 0.19430572 0.19289875 0.19696504 0.19410474] 0.19417428571858467\n[0.19430572 0.19289875 0.19696504 0.19410474 0.19417429] 0.19106657673167424\n[0.19289875 0.19696504 0.19410474 0.19417429 0.19106658] 0.18983741304174753\n[0.19696504 0.19410474 0.19417429 0.19106658 0.18983741] 0.1813800328755735\n[0.19410474 0.19417429 0.19106658 0.18983741 0.18138003] 0.17217282372702905\n[0.19417429 0.19106658 0.18983741 0.18138003 0.17217282] 0.17150799493875393\n[0.19106658 0.18983741 0.18138003 0.17217282 0.17150799] 0.1727294462783048\n[0.18983741 0.18138003 0.17217282 0.17150799 0.17272945] 0.16769677758303755\n[0.18138003 0.17217282 0.17150799 0.17272945 0.16769678] 0.16489828741240561\n[0.17217282 0.17150799 0.17272945 0.16769678 0.16489829] 0.16770451318906834\n[0.17150799 0.17272945 0.16769678 0.16489829 0.16770451] 0.17133018995275565\n[0.17272945 0.16769678 0.16489829 0.16770451 0.17133019] 0.1720414098967489\n[0.16769678 0.16489829 0.16770451 0.17133019 0.17204141] 0.1769039859563937\n[0.16489829 0.16770451 0.17133019 0.17204141 0.17690399] 0.17599177668877627\n[0.16770451 0.17133019 0.17204141 0.17690399 0.17599178] 0.18460375502285606\n[0.17133019 0.17204141 0.17690399 0.17599178 0.18460376] 0.19724333993099621\n[0.17204141 0.17690399 0.17599178 0.18460376 0.19724334] 0.2015416051198329\n[0.17690399 0.17599178 0.18460376 0.19724334 0.20154161] 0.19787729683730168\n[0.17599178 0.18460376 0.19724334 0.20154161 0.1978773 ] 0.1850598596566648\n[0.18460376 0.19724334 0.20154161 0.1978773 0.18505986] 0.17838828976253682\n[0.19724334 0.20154161 0.1978773 0.18505986 0.17838829] 0.178898497127251\n[0.20154161 0.1978773 0.18505986 0.17838829 0.1788985 ] 0.19734388187930688\n[0.1978773 0.18505986 0.17838829 0.1788985 0.19734388] 0.19267453473122334\n[0.18505986 0.17838829 0.1788985 0.19734388 0.19267453] 0.1932620772261556\n[0.17838829 0.1788985 0.19734388 0.19267453 0.19326208] 0.1882525913181373\n[0.1788985 0.19734388 0.19267453 0.19326208 0.18825259] 0.19197878599929172\n[0.19734388 0.19267453 0.19326208 0.18825259 0.19197879] 0.19462267877720588\n[0.19267453 0.19326208 0.18825259 0.19197879 0.19462268] 0.19217206297253994\n[0.19326208 0.18825259 0.19197879 0.19462268 0.19217206] 0.19921466753941633\n[0.18825259 0.19197879 0.19462268 0.19217206 0.19921467] 0.19846479127054692\n[0.19197879 0.19462268 0.19217206 0.19921467 0.19846479] 0.2038453583626235\n[0.19462268 0.19217206 0.19921467 0.19846479 0.20384536] 0.20854560142352016\n[0.19217206 0.19921467 0.19846479 0.20384536 0.2085456 ] 0.20377576038881326\n[0.19921467 0.19846479 0.20384536 0.2085456 0.20377576] 0.20134832814658452\n[0.19846479 0.20384536 0.2085456 0.20377576 0.20134833] 0.20379894395125056\n[0.20384536 0.2085456 0.20377576 0.20134833 0.20379894] 0.20076854528852642\n[0.2085456 0.20377576 0.20134833 0.20379894 0.20076855] 0.19532613908432508\n[0.20377576 0.20134833 0.20379894 0.20076855 0.19532614] 0.19855757358198337\n[0.20134833 0.20379894 0.20076855 
0.19532614 0.19855757] 0.20457205029386713\n[0.20379894 0.20076855 0.19532614 0.19855757 0.20457205] 0.21354732691947567\n[0.20076855 0.19532614 0.19855757 0.20457205 0.21354733] 0.21580466652608218\n[0.19532614 0.19855757 0.20457205 0.21354733 0.21580467] 0.2258081655796804\n[0.19855757 0.20457205 0.21354733 0.21580467 0.22580817] 0.227586216214852\n[0.20457205 0.21354733 0.21580467 0.22580817 0.22758622] 0.22737746802954215\n[0.21354733 0.21580467 0.22580817 0.22758622 0.22737747] 0.22889269100415346\n[0.21580467 0.22580817 0.22758622 0.22737747 0.22889269] 0.22298258799733053\n[0.22580817 0.22758622 0.22737747 0.22889269 0.22298259] 0.223883205096135\n[0.22758622 0.22737747 0.22889269 0.22298259 0.22388321] 0.21318400459710338\n[0.22737747 0.22889269 0.22298259 0.22388321 0.213184 ] 0.21022315739687988\n[0.22889269 0.22298259 0.22388321 0.213184 0.21022316] 0.21357051125710166\n[0.22298259 0.22388321 0.213184 0.21022316 0.21357051] 0.2072391266342187\n[0.22388321 0.213184 0.21022316 0.21357051 0.20723913] 0.21279745142579518\n[0.213184 0.21022316 0.21357051 0.20723913 0.21279745] 0.20996028244982093\n[0.21022316 0.21357051 0.20723913 0.21279745 0.20996028] 0.21665503668157474\n[0.21357051 0.20723913 0.21279745 0.20996028 0.21665504] 0.21548771132858427\n[0.20723913 0.21279745 0.20996028 0.21665504 0.21548771] 0.21731988950069334\n[0.21279745 0.20996028 0.21665504 0.21548771 0.21731989] 0.21612162094839119\n[0.20996028 0.21665504 0.21548771 0.21731989 0.21612162] 0.22491913987450043\n[0.21665504 0.21548771 0.21731989 0.21612162 0.22491914] 0.21610231720439113\n[0.21548771 0.21731989 0.21612162 0.22491914 0.21610232] 0.2133076828213528\n[0.21731989 0.21612162 0.22491914 0.21610232 0.21330768] 0.20802765767758666\n[0.21612162 0.22491914 0.21610232 0.21330768 0.20802766] 0.21088025135431257\n[0.22491914 0.21610232 0.21330768 0.20802766 0.21088025] 0.20776478273052798\n[0.21610232 0.21330768 0.20802766 0.21088025 0.20776478] 0.20355929985312757\n[0.21330768 0.20802766 0.21088025 0.20776478 0.2035593 ] 0.19812465328580045\n[0.20802766 0.21088025 0.20776478 0.2035593 0.19812465] 0.1990793971463539\n[0.21088025 0.20776478 0.2035593 0.19812465 0.1990794 ] 0.19927650665154092\n[0.20776478 0.2035593 0.19812465 0.1990794 0.19927651] 0.20282489557150843\n[0.2035593 0.19812465 0.1990794 0.19927651 0.2028249 ] 0.20218327360132546\n[0.19812465 0.1990794 0.19927651 0.2028249 0.20218327] 0.2015879722447075\n[0.1990794 0.19927651 0.2028249 0.20218327 0.20158797] 0.20093863869933745\n[0.19927651 0.2028249 0.20218327 0.20158797 0.20093864] 0.1941511021561473\n[0.2028249 0.20218327 0.20158797 0.20093864 0.1941511 ] 0.19580547456706957\n[0.20218327 0.20158797 0.20093864 0.1941511 0.19580547] 0.19678723529718747\n[0.20158797 0.20093864 0.1941511 0.19580547 0.19678724] 0.18924987054681522\n[0.20093864 0.1941511 0.19580547 0.19678724 0.18924987] 0.19800868741192687\n[0.1941511 0.19580547 0.19678724 0.18924987 0.19800869] 0.19691867315831116\n[0.19580547 0.19678724 0.18924987 0.19800869 0.19691867] 0.20108938030446125\n[0.19678724 0.18924987 0.19800869 0.19691867 0.20108938] 0.2069917228992215\n[0.18924987 0.19800869 0.19691867 0.20108938 0.20699172] 0.2209533010952229\n[0.19800869 0.19691867 0.20108938 0.20699172 0.2209533 ] 0.21464103184553499\n[0.19691867 0.20108938 0.20699172 0.2209533 0.21464103] 0.2133309617319755\n[0.20108938 0.20699172 0.2209533 0.21464103 0.21333096] 0.20976511324144673\n[0.20699172 0.2209533 0.21464103 0.21333096 0.20976511] 0.20066440801595783\n[0.2209533 0.21464103 0.21333096 
0.20976511 0.20066441] 0.20109851124979014\n[0.21464103 0.21333096 0.20976511 0.20066441 0.20109851] 0.2033310580026352\n[0.21333096 0.20976511 0.20066441 0.20109851 0.20333106] 0.19559464500064003\n[0.20976511 0.20066441 0.20109851 0.20333106 0.19559465] 0.19049389149951226\n[0.20066441 0.20109851 0.20333106 0.19559465 0.19049389] 0.19551714475527157\n[0.20109851 0.20333106 0.19559465 0.19049389 0.19551714] 0.20134653048445547\n[0.20333106 0.19559465 0.19049389 0.19551714 0.20134653] 0.19939308724989416\n[0.19559465 0.19049389 0.19551714 0.20134653 0.19939309] 0.19423030549064793\n[0.19049389 0.19551714 0.20134653 0.19939309 0.19423031] 0.1946799279982286\n[0.19551714 0.20134653 0.19939309 0.19423031 0.19467993] 0.20148610549925236\n[0.20134653 0.19939309 0.19423031 0.19467993 0.20148611] 0.19795896062009785\n[0.19939309 0.19423031 0.19467993 0.20148611 0.19795896] 0.1967574277490268\n[0.19423031 0.19467993 0.20148611 0.19795896 0.19675743] 0.20067987922801925\n[0.19467993 0.20148611 0.19795896 0.19675743 0.20067988] 0.20216051639256274\n[0.20148611 0.19795896 0.19675743 0.20067988 0.20216052] 0.20467211937719312\n[0.19795896 0.19675743 0.20067988 0.20216052 0.20467212] 0.20281164760006168\n[0.19675743 0.20067988 0.20216052 0.20467212 0.20281165] 0.19677294624758682\n[0.20067988 0.20216052 0.20467212 0.20281165 0.19677295] 0.2025635818540865\n[0.20216052 0.20467212 0.20281165 0.19677295 0.20256358] 0.2025558695037107\n[0.20467212 0.20281165 0.19677295 0.20256358 0.20255587] 0.19661013960866555\n[0.20281165 0.19677295 0.20256358 0.20255587 0.19661014] 0.1980674978625863\n[0.19677295 0.20256358 0.20255587 0.19661014 0.1980675 ] 0.20209072849757\n[0.20256358 0.20255587 0.19661014 0.1980675 0.20209073] 0.20296669537729736\n[0.20255587 0.19661014 0.1980675 0.20209073 0.2029667 ] 0.2001682757488187\n[0.19661014 0.1980675 0.20209073 0.2029667 0.20016828] 0.19755589438338558\n[0.1980675 0.20209073 0.2029667 0.20016828 0.19755589] 0.19251716914037617\n[0.20209073 0.2029667 0.20016828 0.19755589 0.19251717] 0.19240863112269915\n[0.2029667 0.20016828 0.19755589 0.19251717 0.19240863] 0.20016051611194433\n[0.20016828 0.19755589 0.19251717 0.19240863 0.20016052] 0.19854811085597682\n[0.19755589 0.19251717 0.19240863 0.20016052 0.19854811] 0.19678070588446103\n[0.19251717 0.19240863 0.20016052 0.19854811 0.19678071] 0.19280398500903576\n[0.19240863 0.20016052 0.19854811 0.19678071 0.19280399] 0.19354042338527608\n[0.20016052 0.19854811 0.19678071 0.19280399 0.19354042] 0.19440087176644325\n[0.19854811 0.19678071 0.19280399 0.19354042 0.19440087] 0.20019150659775445\n[0.19678071 0.19280399 0.19354042 0.19440087 0.20019151] 0.20016051611194433\n[0.19280399 0.19354042 0.19440087 0.20019151 0.20016052] 0.2013232988603311\n[0.19354042 0.19440087 0.20019151 0.20016052 0.2013233 ] 0.20309846346872137\n[0.19440087 0.20019151 0.20016052 0.2013233 0.20309846] 0.2010520022654201\n[0.20019151 0.20016052 0.2013233 0.20309846 0.201052 ] 0.19902872384936762\n[0.20016052 0.2013233 0.20309846 0.201052 0.19902872] 0.19628457361732174\n[0.2013233 0.20309846 0.201052 0.19902872 0.19628457] 0.18840092053665256\n[0.20309846 0.201052 0.19902872 0.19628457 0.18840092] 0.18761797240085373\n[0.201052 0.19902872 0.19628457 0.18840092 0.18761797] 0.1886489862826278\n[0.19902872 0.19628457 0.18840092 0.18761797 0.18864899] 0.1898582780153846\n[0.19628457 0.18840092 0.18761797 0.18864899 0.18985828] 0.18943964599361376\n[0.18840092 0.18761797 0.18864899 0.18985828 0.18943965] 0.19111408028288837\n[0.18761797 0.18864899 0.18985828 
0.18943965 0.19111408] 0.184765284151131\n[0.18864899 0.18985828 0.18943965 0.19111408 0.18476528] 0.18774974126746644\n[0.18985828 0.18943965 0.19111408 0.18476528 0.18774974] 0.1950752818845651\n[0.18943965 0.19111408 0.18476528 0.18774974 0.19507528] 0.20780386385870123\n[0.19111408 0.18476528 0.18774974 0.19507528 0.20780386] 0.21908284721579094\n[0.18476528 0.18774974 0.19507528 0.20780386 0.21908285] 0.21840067746079486\n[0.18774974 0.19507528 0.20780386 0.21908285 0.21840068] 0.23361763157062562\n[0.19507528 0.20780386 0.21908285 0.21840068 0.23361763] 0.30577219022872426\n[0.20780386 0.21908285 0.21840068 0.23361763 0.30577219] 0.29809009313447177\n[0.21908285 0.21840068 0.23361763 0.30577219 0.29809009] 0.2975319326092142\n[0.21840068 0.23361763 0.30577219 0.29809009 0.29753193] 0.29737688560716724\n[0.23361763 0.30577219 0.29809009 0.29753193 0.29737689] 0.2835630676413228\n[0.30577219 0.29809009 0.29753193 0.29737689 0.28356307] 0.2675011379127634\n[0.29809009 0.29753193 0.29737689 0.28356307 0.26750114] 0.270369344661046\n[0.29753193 0.29737689 0.28356307 0.26750114 0.27036934] 0.2709429763983652\n[0.29737689 0.28356307 0.26750114 0.27036934 0.27094298] 0.273989461772819\n[0.28356307 0.26750114 0.27036934 0.27094298 0.27398946] 0.27450111253851817\n[0.26750114 0.27036934 0.27094298 0.27398946 0.27450111] 0.269090264258108\n[0.27036934 0.27094298 0.27398946 0.27450111 0.26909026] 0.2734313485340598\n[0.27094298 0.27398946 0.27450111 0.26909026 0.27343135] 0.27191196202202084\n[0.27398946 0.27450111 0.26909026 0.27343135 0.27191196] 0.28317547339186055\n[0.27450111 0.26909026 0.27343135 0.27191196 0.28317547] 0.2823227381362575\n[0.26909026 0.27343135 0.27191196 0.28317547 0.28232274] 0.2766018431382522\n[0.27343135 0.27191196 0.28317547 0.28232274 0.27660184] 0.2753847909934326\n[0.27191196 0.28317547 0.28232274 0.27660184 0.27538479] 0.29635367787357736\n[0.28317547 0.28232274 0.27660184 0.27538479 0.29635368] 0.29540792387404613\n[0.28232274 0.27660184 0.27538479 0.29635368 0.29540792] 0.29299709849502975\n[0.27660184 0.27538479 0.29635368 0.29540792 0.2929971 ] 0.2935164616111047\n[0.27538479 0.29635368 0.29540792 0.2929971 0.29351646] 0.29642341848207165\n[0.29635368 0.29540792 0.2929971 0.29351646 0.29642342] 0.2927490327490544\n[0.29540792 0.2929971 0.29351646 0.29642342 0.29274903] 0.2964466966175058\n[0.2929971 0.29351646 0.29642342 0.29274903 0.2964467 ] 0.2855397890113184\n[0.29351646 0.29642342 0.29274903 0.2964467 0.28553979] 0.2589120353912866\n[0.29642342 0.29274903 0.2964467 0.28553979 0.25891204] 0.24118347829682532\n[0.29274903 0.2964467 0.28553979 0.25891204 0.24118348] 0.235330815207396\n[0.2964467 0.28553979 0.25891204 0.24118348 0.23533082] 0.2714235893917559\n[0.28553979 0.25891204 0.24118348 0.23533082 0.27142359] 0.27839252624520217\n[0.25891204 0.24118348 0.23533082 0.27142359 0.27839253] 0.27278792890174797\n[0.24118348 0.23533082 0.27142359 0.27839253 0.27278793] 0.2633848885338511\n[0.23533082 0.27142359 0.27839253 0.27278793 0.26338489] 0.24752451479170853\n[0.27142359 0.27839253 0.27278793 0.26338489 0.24752451] 0.2603539224331457\n[0.27839253 0.27278793 0.26338489 0.24752451 0.26035392] 0.25408262654675695\n[0.27278793 0.26338489 0.24752451 0.26035392 0.25408263] 0.24978033967998783\n[0.26338489 0.24752451 0.26035392 0.25408263 0.24978034] 0.2606019408926227\n[0.24752451 0.26035392 0.25408263 0.24978034 0.26060194] 0.259098073654332\n[0.26035392 0.25408263 0.24978034 0.26060194 0.25909807] 0.2657879542759931\n[0.25408263 0.24978034 0.26060194 
0.25909807 0.26578795] 0.2692143215495335\n[0.24978034 0.26060194 0.25909807 0.26578795 0.26921432] 0.2672530713915995\n[0.26060194 0.25909807 0.26578795 0.26921432 0.26725307] 0.2764778339085139\n[0.25909807 0.26578795 0.26921432 0.26725307 0.27647783] 0.2771289651160128\n[0.26578795 0.26921432 0.26725307 0.27647783 0.27712897] 0.28249330363686453\n[0.26921432 0.26725307 0.27647783 0.27712897 0.2824933 ] 0.27191196202202084\n[0.26725307 0.27647783 0.27712897 0.2824933 0.27191196] 0.2767103803807405\n[0.27647783 0.27712897 0.2824933 0.27191196 0.27671038] 0.26682672701945287\n[0.27712897 0.2824933 0.27191196 0.27671038 0.26682673] 0.26657090163660324\n[0.2824933 0.27191196 0.27671038 0.26682673 0.2665709 ] 0.26923755239846925\n[0.27191196 0.27671038 0.26682673 0.2665709 0.26923755] 0.2585166822801387\n[0.27671038 0.26682673 0.2665709 0.26923755 0.25851668] 0.245276496826802\n[0.26682673 0.2665709 0.26923755 0.25851668 0.2452765 ] 0.24533847779842186\n[0.2665709 0.26923755 0.25851668 0.2452765 0.24533848] 0.25576477241121864\n[0.26923755 0.25851668 0.2452765 0.24533848 0.25576477] 0.2579895595271895\n[0.25851668 0.2452765 0.24533848 0.25576477 0.25798956] 0.27009800000444795\n[0.2452765 0.24533848 0.25576477 0.25798956 0.270098 ] 0.281384742998412\n[0.24533848 0.25576477 0.25798956 0.270098 0.28138474] 0.28446226536998565\n[0.25576477 0.25798956 0.270098 0.28138474 0.28446227] 0.2820746716150936\n[0.25798956 0.270098 0.28138474 0.28446227 0.28207467] 0.27959405911627333\n[0.270098 0.28138474 0.28446227 0.28207467 0.27959406] 0.28304365723874925\n[0.28138474 0.28446227 0.28207467 0.27959406 0.28304366] 0.28541573249508145\n[0.28446227 0.28207467 0.27959406 0.28304366 0.28541573] 0.289780047619969\n[0.28207467 0.27959406 0.28304366 0.28541573 0.28978005] 0.2888963211033676\n[0.27959406 0.28304366 0.28541573 0.28978005 0.28889632] 0.2970978286001935\n[0.28304366 0.28541573 0.28978005 0.28889632 0.29709783] 0.2974544323638457\n[0.28541573 0.28978005 0.28889632 0.29709783 0.29745443] 0.30047763960286533\n[0.28978005 0.28889632 0.29709783 0.29745443 0.30047764] 0.28821419863486997\n[0.28889632 0.29709783 0.29745443 0.30047764 0.2882142 ] 0.2822684687398247\n[0.29709783 0.29745443 0.30047764 0.2882142 0.28226847] 0.2893846937336325\n[0.29745443 0.30047764 0.2882142 0.28226847 0.28938469] 0.3283069253187795\n[0.30047764 0.2882142 0.28226847 0.28938469 0.32830693] 0.3366634798176523\n[0.2882142 0.28226847 0.28938469 0.32830693 0.33666348] 0.3333378909249146\n[0.28226847 0.28938469 0.32830693 0.33666348 0.33333789] 0.336795248684265\n[0.28938469 0.32830693 0.33666348 0.33333789 0.33679525] 0.3398727245445289\n[0.32830693 0.33666348 0.33333789 0.33679525 0.33987272] 0.33513633444392765\n[0.33666348 0.33333789 0.33679525 0.33987272 0.33513633] 0.3431207659053994\n[0.33333789 0.33679525 0.33987272 0.33513633 0.34312077] 0.34393470452700836\n[0.33679525 0.33987272 0.33513633 0.34312077 0.3439347 ] 0.34854708539787105\n[0.33987272 0.33513633 0.34312077 0.3439347 0.34854709] 0.35098118891232166\n[0.33513633 0.34312077 0.3439347 0.34854709 0.35098119] 0.35292691979650714\n[0.34312077 0.3439347 0.34854709 0.35098119 0.35292692] 0.3460510016869893\n[0.3439347 0.34854709 0.35098119 0.35292692 0.346051 ] 0.34870989203679237\n[0.34854709 0.35098119 0.35292692 0.346051 0.34870989] 0.3541982397873824\n[0.35098119 0.35292692 0.346051 0.34870989 0.35419824] 0.3509656696385731\n[0.35292692 0.346051 0.34870989 0.35419824 0.35096567] 0.3399347528026473\n[0.346051 0.34870989 0.35419824 0.35096567 0.33993475] 
0.34920602430393144\n[0.34870989 0.35419824 0.35096567 0.33993475 0.34920602] 0.34636880804145875\n[0.35419824 0.35096567 0.33993475 0.34920602 0.34636881] 0.3577640882779112\n[0.35096567 0.33993475 0.34920602 0.34636881 0.35776409] 0.3565315176345317\n[0.33993475 0.34920602 0.34636881 0.35776409 0.35653152] 0.3706321987555342\n[0.34920602 0.34636881 0.35776409 0.35653152 0.3706322 ] 0.370151584986955\n[0.34636881 0.35776409 0.35653152 0.3706322 0.37015158] 0.3641826715294727\n[0.35776409 0.35653152 0.3706322 0.37015158 0.36418267] 0.36408189314867\n[0.35653152 0.3706322 0.37015158 0.36418267 0.36408189] 0.36571753002876173\n[0.3706322 0.37015158 0.36418267 0.36408189 0.36571753] 0.35977955977059095\n[0.37015158 0.36418267 0.36408189 0.36571753 0.35977956] 0.37872516823468316\n[0.36418267 0.36408189 0.36571753 0.35977956 0.37872517] 0.3751128107597844\n[0.36408189 0.36571753 0.35977956 0.37872517 0.37511281] 0.3674849350002777\n[0.36571753 0.35977956 0.37872517 0.37511281 0.36748494] 0.3785468903837006\n[0.35977956 0.37872517 0.37511281 0.36748494 0.37854689] 0.37578722087790617\n[0.37872517 0.37511281 0.36748494 0.37854689 0.37578722] 0.37510505112291015\n[0.37511281 0.36748494 0.37854689 0.37578722 0.37510505] 0.36676401512259715\n[0.36748494 0.37854689 0.37578722 0.37510505 0.36676402] 0.3650973885318841\n[0.37854689 0.37578722 0.37510505 0.36676402 0.36509739] 0.35688812139818404\n[0.37578722 0.37510505 0.36676402 0.36509739 0.35688812] 0.3637873184183247\n[0.37510505 0.36676402 0.36509739 0.35688812 0.36378732] 0.3603997477787786\n[0.36676402 0.36509739 0.35688812 0.36378732 0.36039975] 0.3717872691535451\n[0.36509739 0.35688812 0.36378732 0.36039975 0.37178727] 0.3650741103964499\n[0.35688812 0.36378732 0.36039975 0.37178727 0.36507411] 0.3572292066632764\n[0.36378732 0.36039975 0.37178727 0.36507411 0.35722921] 0.3637873184183247\n[0.36039975 0.37178727 0.36507411 0.35722921 0.36378732] 0.36551597326715635\n[0.37178727 0.36507411 0.35722921 0.36378732 0.36551597] 0.3657562801514461\n[0.36507411 0.35722921 0.36378732 0.36551597 0.36575628] 0.3642756902734011\n[0.35722921 0.36378732 0.36551597 0.36575628 0.36427569] 0.375213589140587\n[0.36378732 0.36551597 0.36575628 0.36427569 0.37521359] 0.38613596873402456\n[0.36551597 0.36575628 0.36427569 0.37521359 0.38613597] 0.38179493174457124\n[0.36575628 0.36427569 0.37521359 0.38613597 0.38179493] 0.37239965101354855\n[0.36427569 0.37521359 0.38613597 0.38179493 0.37239965] 0.35919045604602173\n[0.37521359 0.38613597 0.38179493 0.37239965 0.35919046] 0.3597640877833409\n[0.38613597 0.38179493 0.37239965 0.35919046 0.35976409] 0.36057026676807546\n[0.38179493 0.37239965 0.35919046 0.35976409 0.36057027] 0.3472137844353761\n[0.37239965 0.35919046 0.35976409 0.36057027 0.34721378] 0.33797350341990173\n[0.35919046 0.35976409 0.36057027 0.34721378 0.3379735 ] 0.339182842439157\n[0.35976409 0.36057027 0.34721378 0.3379735 0.33918284] 0.34696571791421216\n[0.36057027 0.34721378 0.3379735 0.33918284 0.34696572] 0.32719065232995137\n[0.34721378 0.3379735 0.33918284 0.34696572 0.32719065] 0.33816730054463284\n[0.3379735 0.33918284 0.34696572 0.32719065 0.3381673 ] 0.32245426145416145\n[0.33918284 0.34696572 0.32719065 0.3381673 0.32245426] 0.3281441186798584\n[0.34696572 0.32719065 0.3381673 0.32245426 0.32814412] 0.32555501544985943\n[0.32719065 0.3381673 0.32245426 0.32814412 0.32555502] 0.33186506145893246\n[0.3381673 0.32245426 0.32814412 0.32555502 0.33186506] 0.3463300579187746\n[0.32245426 0.32814412 0.32555502 0.33186506 0.34633006] 
0.33580298492517513\n[0.32814412 0.32555502 0.33186506 0.34633006 0.33580298] 0.3368649892927592\n[0.32555502 0.33186506 0.34633006 0.33580298 0.33686499] 0.3267487886840562\n[0.33186506 0.34633006 0.33580298 0.33686499 0.32674879] 0.35075640130178054\n[0.34633006 0.33580298 0.33686499 0.32674879 0.3507564 ] 0.36005090365200054\n[0.33580298 0.33686499 0.32674879 0.3507564 0.3600509 ] 0.36706635026500534\n[0.33686499 0.32674879 0.3507564 0.3600509 0.36706635] 0.3768725033809245\n[0.32674879 0.3507564 0.3600509 0.36706635 0.3768725 ] 0.3476478876692082\n[0.3507564 0.3600509 0.36706635 0.3768725 0.34764789] 0.3329658159492008\n[0.3600509 0.36706635 0.3768725 0.34764789 0.33296582] 0.31402020670992015\n[0.36706635 0.3768725 0.34764789 0.33296582 0.31402021] 0.3133767870776083\n[0.3768725 0.34764789 0.33296582 0.31402021 0.31337679] 0.3097876604516452\n[0.34764789 0.33296582 0.31402021 0.31337679 0.30978766] 0.31444655108206665\n[0.33296582 0.31402021 0.31337679 0.30978766 0.31444655] 0.3136636029462677\n[0.31402021 0.31337679 0.30978766 0.31444655 0.3136636 ] 0.313113249344383\n[0.31337679 0.30978766 0.31444655 0.3136636 0.31311325] 0.31977985183060986\n[0.30978766 0.31444655 0.3136636 0.31311325 0.31977985] 0.33326815031642043\n[0.31444655 0.3136636 0.31311325 0.31977985 0.33326815] 0.3247022801942565\n[0.3136636 0.31311325 0.31977985 0.33326815 0.32470228] 0.32746194892486236\n[0.31311325 0.31977985 0.33326815 0.32470228 0.32746195] 0.3317642830781298\n[0.31977985 0.33326815 0.32470228 0.32746195 0.33176428] 0.32353949744586974\n[0.33326815 0.32470228 0.32746195 0.33176428 0.3235395 ] 0.3264154638310268\n[0.32470228 0.32746195 0.33176428 0.3235395 0.32641546] 0.33121388218974646\n[0.32746195 0.33176428 0.3235395 0.32641546 0.33121388] 0.33068675943679726\n[0.33176428 0.3235395 0.32641546 0.33121388 0.33068676] 0.3250278934720988\n[0.3235395 0.32641546 0.33121388 0.33068676 0.32502789] 0.34133784243532383\n[0.32641546 0.33121388 0.33068676 0.32502789 0.34133784] 0.3413688329211336\n[0.33121388 0.33068676 0.32502789 0.34133784 0.34136883] 0.33638437629936846\n[0.33068676 0.32502789 0.34133784 0.34136883 0.33638438] 0.33519836270204606\n[0.32502789 0.34133784 0.34136883 0.33638438 0.33519836] 0.3230046150560464\n[0.34133784 0.34136883 0.33638438 0.33519836 0.32300462] 0.32208213919194917\n[0.34136883 0.33638438 0.33519836 0.32300462 0.32208214] 0.33081852830340996\n[0.33638438 0.33519836 0.32300462 0.32208214 0.33081853] 0.33669447030346233\n[0.33519836 0.32300462 0.32208214 0.33081853 0.33669447] 0.3475471092884056\n[0.32300462 0.32208214 0.33081853 0.33669447 0.34754711] 0.35039203790125406\n[0.32208214 0.33081853 0.33669447 0.34754711 0.35039204] 0.34871765167366653\n[0.33081853 0.33669447 0.34754711 0.35039204 0.34871765] 0.35473312217720576\n[0.33669447 0.34754711 0.35039204 0.34871765 0.35473312] 0.3560431922907652\n[0.34754711 0.35039204 0.34871765 0.35473312 0.35604319] 0.3559036172759683\n[0.35039204 0.34871765 0.35473312 0.35604319 0.35590362] 0.35938425317075284\n[0.34871765 0.35473312 0.35604319 0.35590362 0.35938425] 0.3583454796521046\n[0.35473312 0.35604319 0.35590362 0.35938425 0.35834548] 0.35626022103962074\n[0.35604319 0.35590362 0.35938425 0.35834548 0.35626022] 0.3541206930307039\n[0.35590362 0.35938425 0.35834548 0.35626022 0.35412069] 0.352748641170336\n[0.35938425 0.35834548 0.35626022 0.35412069 0.35274864] 0.36146175292155114\n[0.35834548 0.35626022 0.35412069 0.35274864 0.36146175] 0.3659268456520528\n[0.35626022 0.35412069 0.35274864 0.36146175 0.36592685] 
0.36160128064984953\n[0.35412069 0.35274864 0.36146175 0.36592685 0.36160128] 0.36544618537216367\n[0.35274864 0.36146175 0.36592685 0.36160128 0.36544619] 0.36186481838307494\n[0.36146175 0.36592685 0.36160128 0.36544619 0.36186482] 0.35605866427801525\n[0.36592685 0.36160128 0.36544619 0.36186482 0.35605866] 0.3621749123871686\n[0.36160128 0.36544619 0.36186482 0.35605866 0.36217491] 0.3579811635380765\n[0.36544619 0.36186482 0.35605866 0.36217491 0.35798116] 0.3571051966583494\n[0.36186482 0.35605866 0.36217491 0.35798116 0.3571052 ] 0.35474083452758154\n[0.35605866 0.36217491 0.35798116 0.3571052 0.35474083] 0.3601594416696774\n[0.36217491 0.35798116 0.3571052 0.35474083 0.36015944] 0.36684927500484\n[0.35798116 0.3571052 0.35474083 0.36015944 0.36684928] 0.3679965857659768\n[0.3571052 0.35474083 0.36015944 0.36684928 0.36799659] 0.37249266975747697\n[0.35474083 0.36015944 0.36684928 0.36799659 0.37249267] 0.37839184260646497\n[0.36015944 0.36684928 0.36799659 0.37249267 0.37839184] 0.3685624586416102\n[0.36684928 0.36799659 0.37249267 0.37839184 0.36856246] 0.36758571338108037\n[0.36799659 0.37249267 0.37839184 0.36856246 0.36758571] 0.37260120777515404\n[0.37249267 0.37839184 0.36856246 0.36758571 0.37260121] 0.3413068519495137\n[0.37839184 0.36856246 0.36758571 0.37260121 0.34130685] 0.344702180675557\n[0.36856246 0.36758571 0.37260121 0.34130685 0.34470218] 0.33306659433000363\n[0.36758571 0.37260121 0.34130685 0.34470218 0.33306659] 0.33128367008473913\n[0.37260121 0.34130685 0.34470218 0.33306659 0.33128367] 0.3197953711043583\n[0.34130685 0.34470218 0.33306659 0.33128367 0.31979537] 0.321337988465333\n[0.34470218 0.33306659 0.33128367 0.31979537 0.32133799] 0.3253689779620026\n[0.33306659 0.33128367 0.31979537 0.32133799 0.32536898] 0.32083409656131956\n[0.33128367 0.31979537 0.32133799 0.32536898 0.3208341 ] 0.323423247077817\n[0.31979537 0.32133799 0.32536898 0.3208341 0.32342325] 0.327865062448073\n[0.32133799 0.32536898 0.3208341 0.32342325 0.32786506] 0.3353766405530286\n[0.32536898 0.3208341 0.32342325 0.32786506 0.33537664] 0.33675649856158074\n[0.3208341 0.32342325 0.32786506 0.33537664 0.3367565 ] 0.34472541229968123\n[0.32342325 0.32786506 0.33537664 0.3367565 0.34472541] 0.3386091634153394\n[0.32786506 0.33537664 0.3367565 0.34472541 0.33860916] 0.337074305691239\n[0.33537664 0.3367565 0.34472541 0.33860916 0.33707431] 0.3351518529424875\n[0.3367565 0.34472541 0.33860916 0.33707431 0.33515185] 0.3395393989163108\n[0.34472541 0.33860916 0.33707431 0.33515185 0.3395394 ] 0.33158595716546024\n[0.33860916 0.33707431 0.33515185 0.3395394 0.33158596] 0.3318960519447425\n[0.33707431 0.33515185 0.3395394 0.33158596 0.33189605] 0.32700461406690584\n[0.33515185 0.3395394 0.33158596 0.33189605 0.32700461] 0.3343068765485703\n[0.3395394 0.33158596 0.33189605 0.32700461 0.33430688] 0.3300433398044854\n[0.33158596 0.33189605 0.32700461 0.33430688 0.33004334] 0.3423301061944135\n[0.33189605 0.32700461 0.33430688 0.33004334 0.34233011] 0.346345577192523\n[0.32700461 0.33430688 0.33004334 0.34233011 0.34634558] 0.34545409103904734\n[0.33430688 0.33004334 0.34233011 0.34634558 0.34545409] 0.35207418376571575\n[0.33004334 0.34233011 0.34634558 0.34545409 0.35207418] 0.35444625902204774\n[0.34233011 0.34634558 0.34545409 0.35207418 0.35444626] 0.35322925416372675\n[0.34634558 0.34545409 0.35207418 0.35444626 0.35322925] 0.35032229729275977\n[0.34545409 0.35207418 0.35444626 0.35322925 0.3503223 ] 0.3440742803169936\n[0.35207418 0.35444626 0.35322925 0.3503223 0.34407428] 
0.33958590867586924\n[0.35444626 0.35322925 0.3503223 0.34407428 0.33958591] 0.3396634554325477\n[0.35322925 0.3503223 0.34407428 0.33958591 0.33966346] 0.34867890155098236\n[0.3503223 0.34407428 0.33958591 0.33966346 0.3486789 ] 0.34891144879839764\n[0.34407428 0.33958591 0.33966346 0.3486789 0.34891145] 0.34180293615496593\n[0.33958591 0.33966346 0.3486789 0.34891145 0.34180294] 0.340988997533357\n[0.33966346 0.3486789 0.34891145 0.34180294 0.340989 ] 0.3409192577000514\n[0.3486789 0.34891145 0.34180294 0.340989 0.34091926] 0.34142310154237787\n[0.34891145 0.34180294 0.340989 0.34091926 0.3414231 ] 0.3347874895419608\n[0.34180294 0.340989 0.34091926 0.3414231 0.33478749] 0.32033796506936896\n[0.340989 0.34091926 0.3414231 0.33478749 0.32033797] 0.32188062971684217\n[0.34091926 0.3414231 0.33478749 0.32033797 0.32188063] 0.32360928456567384\n[0.3414231 0.33478749 0.32033797 0.32188063 0.32360928] 0.32478758658780915\n[0.33478749 0.32033797 0.32188063 0.32360928 0.32478759] 0.3282061469379768\n[0.32033797 0.32188063 0.32360928 0.32478759 0.32820615] 0.3075473548371145\n[0.32188063 0.32360928 0.32478759 0.32820615 0.30754735] 0.30215207311695125\n[0.32360928 0.32478759 0.32820615 0.30754735 0.30215207] 0.31128376882825015\n[0.32478759 0.32820615 0.30754735 0.30215207 0.31128377] 0.31443879144519227\n[0.32820615 0.30754735 0.30215207 0.31128377 0.31443879] 0.32063254057490276\n[0.30754735 0.30215207 0.31128377 0.31443879 0.32063254] 0.32614416646092703\n[0.30215207 0.31128377 0.31443879 0.32063254 0.32614417] 0.32284185570362367\n[0.31128377 0.31443879 0.32063254 0.32614417 0.32284186] 0.3250278934720988\n[0.31443879 0.32063254 0.32614417 0.32284186 0.32502789] 0.3231596620580931\n[0.32063254 0.32614417 0.32284186 0.32502789 0.32315966] 0.33112086344581804\n[0.32614417 0.32284186 0.32502789 0.32315966 0.33112086] 0.33845416369979087\n[0.32284186 0.32502789 0.32315966 0.33112086 0.33845416] 0.3427564505665601\n[0.32502789 0.32315966 0.33112086 0.33845416 0.34275645] 0.33991923352889875\n[0.32315966 0.33112086 0.33845416 0.34275645 0.33991923] 0.3429967566756611\n[0.33112086 0.33845416 0.34275645 0.33991923 0.34299676] 0.3421440214200582\n[0.33845416 0.34275645 0.33991923 0.34299676 0.34214402] 0.3529424382950671\n[0.34275645 0.33991923 0.34299676 0.34214402 0.35294244] 0.3554075322953276\n[0.33991923 0.34299676 0.34214402 0.35294244 0.35540753] 0.35868656414200817\n[0.34299676 0.34214402 0.35294244 0.35540753 0.35868656] 0.3567020839103271\n[0.34214402 0.35294244 0.35540753 0.35868656 0.35670208] 0.35988809701307944\n[0.35294244 0.35540753 0.35868656 0.35670208 0.3598881 ] 0.3575858104269286\n[0.35540753 0.35868656 0.35670208 0.3598881 0.35758581] 0.356539277271406\n[0.35868656 0.35670208 0.3598881 0.35758581 0.35653928] 0.3591361874247776\n[0.35670208 0.3598881 0.35758581 0.35653928 0.35913619] 0.3623454313764657\n[0.3598881 0.35758581 0.35653928 0.35913619 0.36234543] 0.38008174810780104\n[0.35758581 0.35653928 0.35913619 0.36234543 0.38008175] 0.3832522899984918\n[0.35653928 0.35913619 0.36234543 0.38008175 0.38325229] 0.3818492003658155\n[0.35913619 0.36234543 0.38008175 0.38325229 0.3818492 ] 0.3834848372459072\n[0.36234543 0.38008175 0.38325229 0.3818492 0.38348484] 0.3822677851010876\n[0.38008175 0.38325229 0.3818492 0.38348484 0.38226779] 0.39049252422203784\n[0.38325229 0.3818492 0.38348484 0.38226779 0.39049252] 0.390135967744884\n[0.3818492 0.38348484 0.38226779 0.39049252 0.39013597] 0.39207393899219534\n[0.38348484 0.38226779 0.39049252 0.39013597 0.39207394] 
0.39239950498353915\n[0.38226779 0.39049252 0.39013597 0.39207394 0.3923995 ] 0.39253127385015185\n[0.39049252 0.39013597 0.39207394 0.3923995 0.39253127] 0.3912677127209624\n[0.39013597 0.39207394 0.3923995 0.39253127 0.39126771] 0.3906630897226445\n[0.39207394 0.3923995 0.39253127 0.39126771 0.39066309] 0.3865546007557954\n[0.3923995 0.39253127 0.39126771 0.39066309 0.3865546 ] 0.3887018403398992\n[0.39253127 0.39126771 0.39066309 0.3865546 0.38870184] 0.38683365698758054\n[0.39126771 0.39066309 0.3865546 0.38870184 0.38683366] 0.38522125173161315\n[0.39066309 0.3865546 0.38870184 0.38683366 0.38522125] 0.38268641712285845\n[0.3865546 0.38870184 0.38683366 0.38522125 0.38268642] 0.3826321485016142\n[0.38870184 0.38683366 0.38522125 0.38268642 0.38263215] 0.38074068701386155\n[0.38683366 0.38522125 0.38268642 0.38263215 0.38074069] 0.3805623611011918\n[0.38522125 0.38268642 0.38263215 0.38074069 0.38056236] 0.3806631394819945\n[0.38268642 0.38263215 0.38074069 0.38056236 0.38066314] 0.38268641712285845\n[0.38263215 0.38074069 0.38056236 0.38066314 0.38268642] 0.3803143426417148\n[0.38074069 0.38056236 0.38066314 0.38268642 0.38031434] 0.37873292787155743\n[0.38056236 0.38066314 0.38268642 0.38031434 0.37873293] 0.38007403575742527\n[0.38066314 0.38268642 0.38031434 0.37873293 0.38007404] 0.38215153550822345\n[0.38268642 0.38031434 0.37873293 0.38007404 0.38215154] 0.3888336564930104\n[0.38031434 0.37873293 0.38007404 0.38215154 0.38883366] 0.38904292560499143\n[0.37873293 0.38007404 0.38215154 0.38883366 0.38904293] 0.3851437514862447\n[0.38007404 0.38215154 0.38883366 0.38904293 0.38514375] 0.37300427323667773\n[0.38215154 0.38883366 0.38904293 0.38514375 0.37300427] 0.3802600740204707\n[0.38883366 0.38904293 0.38514375 0.37300427 0.38026007] 0.37302755137211174\n[0.38904293 0.38514375 0.37300427 0.38026007 0.37302755] 0.37519806986683857\n[0.38514375 0.37300427 0.38026007 0.37302755 0.37519807] 0.38238408275563884\n[0.37300427 0.38026007 0.37302755 0.37519807 0.38238408] 0.3801515360027937\n[0.38026007 0.37302755 0.37519807 0.38238408 0.38015154] 0.3776864420025333\n[0.37302755 0.37519807 0.38238408 0.38015154 0.37768644] 0.3821127380990408\n[0.37519807 0.38238408 0.38015154 0.37768644 0.38211274] 0.38584139322849076\n[0.38238408 0.38015154 0.37768644 0.38211274 0.38584139] 0.3943607543662846\n[0.38015154 0.37768644 0.38211274 0.38584139 0.39436075] 0.39412044748199504\n[0.37768644 0.38211274 0.38584139 0.39436075 0.39412045] 0.3842833038802659\n[0.38211274 0.38584139 0.39436075 0.39412045 0.3842833 ] 0.3911049533685397\n[0.38584139 0.39436075 0.39412045 0.3842833 0.39110495] 0.3899809207428373\n[0.39436075 0.39412045 0.3842833 0.39110495 0.38998092] 0.3849034453771435\n[0.39412045 0.3842833 0.39110495 0.38998092 0.38490345] 0.38667085034865945\n[0.3842833 0.39110495 0.38998092 0.38490345 0.38667085] 0.38300422425251657\n[0.39110495 0.38998092 0.38490345 0.38667085 0.38300422] 0.38600419986741186\n[0.38998092 0.38490345 0.38667085 0.38300422 0.3860042 ] 0.38603519035322187\n[0.38490345 0.38667085 0.38300422 0.3860042 0.38603519] 0.38633752472044125\n[0.38667085 0.38300422 0.3860042 0.38603519 0.38633752] 0.38495771399838774\n[0.38300422 0.3860042 0.38603519 0.38633752 0.38495771] 0.3933762494688804\n[0.3860042 0.38603519 0.38633752 0.38495771 0.39337625] 0.39115146235290965\n[0.38603519 0.38633752 0.38495771 0.39337625 0.39115146] 0.393531297246116\n[0.38633752 0.38495771 0.39337625 0.39115146 0.3935313 ] 0.3873685386022153\n[0.38495771 0.39337625 0.39115146 0.3935313 0.38736854] 
0.38763212362193933\n[0.39337625 0.39115146 0.3935313 0.38736854 0.38763212] 0.38874063774908185\n[0.39115146 0.3935313 0.38736854 0.38763212 0.38874064] 0.40060101248036495\n[0.3935313 0.38736854 0.38763212 0.38874064 0.40060101] 0.405438180961769\n[0.38736854 0.38763212 0.38874064 0.40060101 0.40543818] 0.40192655458117443\n[0.38763212 0.38874064 0.40060101 0.40543818 0.40192655] 0.4037870255831173\n[0.38874064 0.40060101 0.40543818 0.40192655 0.40378703] 0.41443810780645507\n[0.40060101 0.40543818 0.40192655 0.40378703 0.41443811] 0.41022108082192876\n[0.40543818 0.40192655 0.40378703 0.41443811 0.41022108] 0.4035544783357019\n[0.40192655 0.40378703 0.41443811 0.41022108 0.40355448] 0.40067075308885924\n[0.40378703 0.41443811 0.41022108 0.40355448 0.40067075] 0.4006862715874192\n[0.41443811 0.41022108 0.40355448 0.40067075 0.40068627] 0.3922909669658622\n[0.41022108 0.40355448 0.40067075 0.40068627 0.39229097] 0.39157004708818177\n[0.40355448 0.40067075 0.40068627 0.39229097 0.39157005] 0.38001200749930686\n[0.40067075 0.40068627 0.39229097 0.39157005 0.38001201] 0.3749190136350532\n[0.40068627 0.39229097 0.39157005 0.38001201 0.37491901] 0.37483375452799905\n[0.39229097 0.39157005 0.38001201 0.37491901 0.37483375] 0.39072511875595173\n[0.39157005 0.38001201 0.37491901 0.37483375 0.39072512] 0.39691886711047353\n[0.38001201 0.37491901 0.37483375 0.39072512 0.39691887] 0.39288787761380417\n[0.37491901 0.37483375 0.39072512 0.39691887 0.39288788] 0.3752523392632714\n[0.37483375 0.39072512 0.39691887 0.39288788 0.37525234] 0.368632246536603\n[0.39072512 0.39691887 0.39288788 0.37525234 0.36863225] 0.3547253625403316\n[0.39691887 0.39288788 0.37525234 0.36863225 0.35472536] 0.3720973158711406\n[0.39288788 0.37525234 0.36863225 0.35472536 0.37209732] 0.3767406872278134\n[0.37525234 0.36863225 0.35472536 0.37209732 0.37674069] 0.3819732095955538\n[0.36863225 0.35472536 0.37209732 0.37674069 0.38197321] 0.37368644299167386\n[0.35472536 0.37209732 0.37674069 0.38197321 0.37368644] 0.3803996017487691\n[0.37209732 0.37674069 0.38197321 0.37368644 0.3803996 ] 0.37967868264627735\n[0.37674069 0.38197321 0.37368644 0.3803996 0.37967868] 0.3740352871184518\n[0.38197321 0.37368644 0.3803996 0.37967868 0.37403529] 0.37457016950827515\n[0.37368644 0.3803996 0.37967868 0.37403529 0.37457017] 0.37965540373565465\n[0.3803996 0.37967868 0.37403529 0.37457017 0.3796554 ] 0.3816709225148327\n[0.37967868 0.37403529 0.37457017 0.3796554 0.38167092] 0.37174847174436254\n[0.37403529 0.37457017 0.3796554 0.38167092 0.37174847] 0.36390356801118884\n[0.37457017 0.3796554 0.38167092 0.37174847 0.36390357] 0.36590356751661857\n[0.3796554 0.38167092 0.37174847 0.36390357 0.36590357] 0.3752213487774614\n[0.38167092 0.37174847 0.36390357 0.36590357 0.37522135] 0.3725779288645311\n[0.37174847 0.36390357 0.36590357 0.37522135 0.37257793] 0.3819422191097438\n[0.36390357 0.36590357 0.37522135 0.37257793 0.38194222] 0.3859964402305376\n[0.36590357 0.37522135 0.37257793 0.38194222 0.38599644] 0.3959731123357537\n[0.37522135 0.37257793 0.38194222 0.38599644 0.39597311] 0.39595764112369214\n[0.37257793 0.38194222 0.38599644 0.39597311 0.39595764] 0.4012521444630526\n[0.38194222 0.38599644 0.39597311 0.39595764 0.40125214] 0.4020041013378529\n[0.38599644 0.39597311 0.39595764 0.40125214 0.4020041 ] 0.40260872433617056\n[0.39597311 0.39595764 0.40125214 0.4020041 0.40260872] 0.39714365472101465\n[0.39595764 0.40125214 0.4020041 0.40260872 0.39714365] 0.39977931422188207\n[0.40125214 0.4020041 0.40260872 0.39714365 0.39977931] 
0.40150021020902804\n[0.4020041 0.40260872 0.39714365 0.39977931 0.40150021] 0.400058371228856\n[0.40260872 0.39714365 0.39977931 0.40150021 0.40005837] 0.39750025848466697\n[0.39714365 0.39977931 0.40150021 0.40005837 0.39750026] 0.39645372532914447\n[0.39977931 0.40150021 0.40005837 0.39750026 0.39645373] 0.3977250460952081\n[0.40150021 0.40005837 0.39750026 0.39645373 0.39772505] 0.3926863208521987\n[0.40005837 0.39750026 0.39645373 0.39772505 0.39268632] 0.3909343870927443\n[0.39750026 0.39645373 0.39772505 0.39268632 0.39093439] 0.3824305917400088\n[0.39645373 0.39772505 0.39268632 0.39093439 0.38243059] 0.393531297246116\n[0.39772505 0.39268632 0.39093439 0.38243059 0.3935313 ] 0.39412044748199504\n[0.39268632 0.39093439 0.38243059 0.3935313 0.39412045] 0.39963978649358367\n[0.39093439 0.38243059 0.3935313 0.39412045 0.39963979] 0.40904282608629206\n[0.38243059 0.3935313 0.39412045 0.39963979 0.40904283] 0.4094304203357543\n[0.3935313 0.39412045 0.39963979 0.40904283 0.40943042] 0.40798853406908375\n[0.39412045 0.39963979 0.40904283 0.40943042 0.40798853] 0.4104071183097857\n[0.39963979 0.40904283 0.40943042 0.40798853 0.41040712] 0.40920558543871455\n[0.40904283 0.40943042 0.40798853 0.41040712 0.40920559] 0.41038388746084997\n[0.40943042 0.40798853 0.41040712 0.40920559 0.41038389] 0.40784900556559667\n[0.40798853 0.41040712 0.40920559 0.41038389 0.40784901] 0.40898079782817354\n[0.41040712 0.40920559 0.41038389 0.40784901 0.4089808 ] 0.4059614238962811\n[0.40920559 0.41038389 0.40784901 0.4089808 0.40596142] 0.4081668599817533\n[0.41038389 0.40784901 0.4089808 0.40596142 0.40816686] 0.4192442865772378\n[0.40784901 0.4089808 0.40596142 0.40816686 0.41924429] 0.42277914380676795\n[0.4089808 0.40596142 0.40816686 0.41924429 0.42277914] 0.4319263587918153\n[0.40596142 0.40816686 0.41924429 0.42277914 0.43192636] 0.4291977270583295\n[0.40816686 0.41924429 0.42277914 0.43192636 0.42919773] 0.42234504057293587\n[0.41924429 0.42277914 0.43192636 0.42919773 0.42234504] 0.40607384095720667\n[0.42277914 0.43192636 0.42919773 0.42234504 0.40607384] 0.40178702607768757\n[0.43192636 0.42919773 0.42234504 0.40607384 0.40178703] 0.40093821715183176\n[0.42919773 0.42234504 0.40607384 0.40178703 0.40093822] 0.4031358936004297\n[0.42234504 0.40607384 0.40178703 0.40093822 0.40313589] 0.4054304213248948\n[0.40607384 0.40178703 0.40093822 0.40313589 0.40543042] 0.40531417173203055\n[0.40178703 0.40093822 0.40313589 0.40543042 0.40531417] 0.4096784395704196\n[0.40093822 0.40313589 0.40543042 0.40531417 0.40967844] 0.4107714817103122\n[0.40313589 0.40543042 0.40531417 0.40967844 0.41077148] 0.4116861987127237\n[0.40543042 0.40531417 0.40967844 0.41077148 0.4116862 ] 0.41487221181547596\n[0.40531417 0.40967844 0.41077148 0.4116862 0.41487221] 0.41919001718080495\n[0.40967844 0.41077148 0.4116862 0.41487221 0.41919002] 0.4201280123186506\n[0.41077148 0.4116862 0.41487221 0.41919002 0.42012801] 0.41898846041919957\n[0.4116862 0.41487221 0.41919002 0.42012801 0.41898846] 0.4230039314173092\n[0.41487221 0.41919002 0.42012801 0.41898846 0.42300393] 0.426034944804513\n[0.41919002 0.42012801 0.41898846 0.42300393 0.42603494] 0.42881784515924304\n[0.42012801 0.41898846 0.42300393 0.42603494 0.42881785] 0.4281202041921853\n[0.41898846 0.42300393 0.42603494 0.42881785 0.4281202 ] 0.4285620670628919\n[0.42300393 0.42603494 0.42881785 0.4281202 0.42856207] 0.42647680845040803\n[0.42603494 0.42881785 0.4281202 0.42856207 0.42647681] 0.42697293994235863\n[0.42881785 0.4281202 0.42856207 0.42647681 0.42697294] 
0.4222675403275673\n[0.4281202 0.42856207 0.42647681 0.42697294 0.42226754] 0.4315930331635973\n[0.42856207 0.42647681 0.42697294 0.42226754 0.43159303] 0.42801942581138275\n[0.42647681 0.42697294 0.42226754 0.43159303 0.42801943] 0.4268178929403117\n[0.42697294 0.42226754 0.43159303 0.42801943 0.42681789] 0.42581015719397186\n[0.42226754 0.43159303 0.42801943 0.42681789 0.42581016] 0.42901164228397415\n[0.43159303 0.42801943 0.42681789 0.42581016 0.42901164] 0.4316938115443999\n[0.42801943 0.42681789 0.42581016 0.42901164 0.43169381] 0.4342596839254631\n[0.42681789 0.42581016 0.42901164 0.43169381 0.43425968] 0.43780230079186766\n[0.42581016 0.42901164 0.43169381 0.43425968 0.4378023 ] 0.43957746540025766\n[0.42901164 0.43169381 0.43425968 0.4378023 0.43957747] 0.4396394936583762\n[0.43169381 0.43425968 0.4378023 0.43957747 0.43963949] 0.44086430466488136\n[0.43425968 0.4378023 0.43957747 0.43963949 0.4408643 ] 0.44208911567138665\n[0.4378023 0.43957747 0.43963949 0.4408643 0.44208912] 0.4446782189013855\n[0.43957747 0.43963949 0.4408643 0.44208912 0.44467822] 0.44179454016585284\n[0.43963949 0.4408643 0.44208912 0.44467822 0.44179454] 0.42788765694477005\n[0.4408643 0.44208912 0.44467822 0.44179454 0.42788766] 0.4272132468266482\n[0.44208912 0.44467822 0.44179454 0.42788766 0.42721325] 0.41790322520267975\n[0.44467822 0.44179454 0.42788766 0.42721325 0.41790323] 0.41546136282654356\n[0.44179454 0.42788766 0.42721325 0.41790323 0.41546136] 0.4193993335792845\n[0.42788766 0.42721325 0.41790323 0.41546136 0.41939933] 0.4204923284326787\n[0.42721325 0.41790323 0.41546136 0.41939933 0.42049233] 0.42862404803451193\n[0.41790323 0.41546136 0.41939933 0.42049233 0.42862405] 0.4286938359295045\n[0.41546136 0.41939933 0.42049233 0.42862405 0.42869384] 0.4271899686912142\n[0.41939933 0.42049233 0.42862405 0.42869384 0.42718997] 0.4341589055446604\n[0.42049233 0.42862405 0.42869384 0.42718997 0.43415891] 0.4310736700475223\n[0.42862405 0.42869384 0.42718997 0.43415891 0.43107367] 0.42862404803451193\n[0.42869384 0.42718997 0.43415891 0.43107367 0.42862405] 0.4258876574393403\n[0.42718997 0.43415891 0.43107367 0.42862405 0.42588766] 0.42339928530364557\n[0.43415891 0.43107367 0.42862405 0.42588766 0.42339929] 0.42344579428801554\n[0.43107367 0.42862405 0.42588766 0.42339929 0.42344579] 0.42237603105874577\n[0.42862405 0.42588766 0.42339929 0.42344579 0.42237603] 0.423127987933546\n[0.42588766 0.42339929 0.42344579 0.42237603 0.42312799] 0.422538837697667\n[0.42339929 0.42344579 0.42237603 0.42312799 0.42253884] 0.43308914154020206\n[0.42344579 0.42237603 0.42312799 0.42253884 0.43308914] 0.4328178441701025\n[0.42237603 0.42312799 0.42253884 0.43308914 0.43281784] 0.433895367811435\n[0.42312799 0.42253884 0.43308914 0.43281784 0.43389537] 0.43656201779811243\n[0.42253884 0.43308914 0.43281784 0.43389537 0.43656202] 0.437755791032309\n[0.43308914 0.43281784 0.43389537 0.43656202 0.43775579] 0.45292623615776995\n[0.43281784 0.43389537 0.43656202 0.43775579 0.45292624] 0.4603215173833629\n[0.43389537 0.43656202 0.43775579 0.45292624 0.46032152] 0.4598796537374681\n[0.43656202 0.43775579 0.45292624 0.46032152 0.45987965] 0.46183314425852773\n[0.43775579 0.45292624 0.46032152 0.45987965 0.46183314] 0.4864143886135711\n[0.45292624 0.46032152 0.45987965 0.46183314 0.48641439] 0.4915383729636347\n[0.46032152 0.45987965 0.46183314 0.48641439 0.49153837] 0.4945383485785301\n[0.45987965 0.46183314 0.48641439 0.49153837 0.49453835] 0.502755328062606\n[0.46183314 0.48641439 0.49153837 0.49453835 0.50275533] 
0.5063366950516948\n[0.48641439 0.49153837 0.49453835 0.50275533 0.5063367 ] 0.5028251159575987\n[0.49153837 0.49453835 0.50275533 0.5063367 0.50282512] 0.5083832043166829\n[0.49453835 0.50275533 0.5063367 0.50282512 0.5083832 ] 0.5067320489380313\n[0.50275533 0.5063367 0.50282512 0.5083832 0.50673205] 0.5041041955853481\n[0.5063367 0.50282512 0.5083832 0.50673205 0.5041042 ] 0.5055149975684002\n[0.50282512 0.5083832 0.50673205 0.5041042 0.505515 ] 0.5067707990607155\n[0.5083832 0.50673205 0.5041042 0.505515 0.5067708 ] 0.5105382508241596\n[0.50673205 0.5041042 0.505515 0.5067708 0.51053825] 0.5151273535595883\n[0.5041042 0.505515 0.5067708 0.51053825 0.51512735] 0.4970034425787906\n[0.505515 0.5067708 0.51053825 0.51512735 0.49700344] 0.5052359405614265\n[0.5067708 0.51053825 0.51512735 0.49700344 0.50523594] 0.5081584167061418\n[0.51053825 0.51512735 0.49700344 0.50523594 0.50815842] 0.5142436270429869\n[0.51512735 0.49700344 0.50523594 0.50815842 0.51424363] 0.5196389560496485\n[0.49700344 0.50523594 0.50815842 0.51424363 0.51963896] 0.5243986250608724\n[0.50523594 0.50815842 0.51424363 0.51963896 0.52439863] 0.535700839266898\n[0.50815842 0.51424363 0.51963896 0.52439863 0.53570084] 0.5371969476435028\n[0.51424363 0.51963896 0.52439863 0.53570084 0.53719695] 0.5406155552801689\n[0.51963896 0.52439863 0.53570084 0.53719695 0.54061556] 0.5320729625182508\n[0.52439863 0.53570084 0.53719695 0.54061556 0.53207296] 0.5336931274110924\n[0.53570084 0.53719695 0.54061556 0.53207296 0.53369313] 0.5403984800200036\n[0.53719695 0.54061556 0.53207296 0.53369313 0.54039848] 0.5466620162695182\n[0.54061556 0.53207296 0.53369313 0.54039848 0.54666202] 0.5411504368948038\n[0.53207296 0.53369313 0.54039848 0.54666202 0.54115044] 0.5445380067591615\n[0.53369313 0.54039848 0.54666202 0.54115044 0.54453801] 0.5464526998710386\n[0.54039848 0.54666202 0.54115044 0.54453801 0.5464527 ] 0.5204219041854472\n[0.54666202 0.54115044 0.54453801 0.5464527 0.5204219 ] 0.5150498533142198\n[0.54115044 0.54453801 0.5464527 0.5204219 0.51504985] 0.5231893325529272\n[0.54453801 0.5464527 0.5204219 0.51504985 0.52318933] 0.5211428240631275\n[0.5464527 0.5204219 0.51504985 0.52318933 0.52114282] 0.5145924719449533\n[0.5204219 0.51504985 0.52318933 0.52114282 0.51459247] 0.5126312690735177\n[0.51504985 0.52318933 0.52114282 0.51459247 0.51263127] 0.5262668084131911\n[0.52318933 0.52114282 0.51459247 0.51263127 0.52626681] 0.5210420456823248\n[0.52114282 0.51459247 0.51263127 0.52626681 0.52104205] 0.5278792136691586\n[0.51459247 0.51263127 0.52626681 0.52104205 0.52787921] 0.5260497804395242\n[0.51263127 0.52626681 0.52104205 0.52787921 0.52604978] 0.5326388826803826\n[0.52626681 0.52104205 0.52787921 0.52604978 0.53263888] 0.5223133664483885\n[0.52104205 0.52787921 0.52604978 0.53263888 0.52231337] 0.5029801629596455\n[0.52787921 0.52604978 0.53263888 0.52231337 0.50298016] 0.5131816226754028\n[0.52604978 0.53263888 0.52231337 0.50298016 0.51318162] 0.4955848344475542\n[0.53263888 0.52231337 0.50298016 0.51318162 0.49558483] 0.488561628197675\n[0.52231337 0.50298016 0.51318162 0.49558483 0.48856163] 0.4807865123594941\n[0.50298016 0.51318162 0.49558483 0.48856163 0.48078651] 0.4908717224823871\n[0.51318162 0.49558483 0.48856163 0.48078651 0.49087172] 0.4869802607140161\n[0.49558483 0.48856163 0.48078651 0.49087172 0.48698026] 0.4962050232309302\n[0.48856163 0.48078651 0.49087172 0.48698026 0.49620502] 0.5041196675725981\n[0.48078651 0.49087172 0.48698026 0.49620502 0.50411967] 0.5051196909685622\n[0.49087172 0.48698026 
0.49620502 0.50411967 0.50511969] 0.5157707731919001\n[0.48698026 0.49620502 0.50411967 0.50511969 0.51577077] 0.5183521167850248\n[0.49620502 0.50411967 0.50511969 0.51577077 0.51835212] 0.5251970444087328\n[0.50411967 0.50511969 0.51577077 0.51835212 0.52519704] 0.5232048045401773\n[0.50511969 0.51577077 0.51835212 0.52519704 0.5232048 ] 0.5324915945400215\n[0.51577077 0.51835212 0.52519704 0.5232048 0.53249159] 0.5367473724224205\n[0.51835212 0.52519704 0.5232048 0.53249159 0.53674737] 0.5346233629120639\n[0.52519704 0.5232048 0.53249159 0.53674737 0.53462336] 0.5383209802692054\n[0.5232048 0.53249159 0.53674737 0.53462336 0.53832098] 0.5440729130395192\n[0.53249159 0.53674737 0.53462336 0.53832098 0.54407291] 0.5210963143035691\n[0.53674737 0.53462336 0.53832098 0.54407291 0.52109631] 0.518848249052164\n[0.53462336 0.53832098 0.54407291 0.52109631 0.51884825] 0.5082204449642602\n[0.53832098 0.54407291 0.52109631 0.51884825 0.50822044] 0.5139878489466357\n[0.54407291 0.52109631 0.51884825 0.50822044 0.51398785] 0.5054374973230318\n[0.52109631 0.51884825 0.50822044 0.51398785 0.5054375 ] 0.5056933227058814\n[0.51884825 0.50822044 0.51398785 0.5054375 0.50569332] 0.5053522382159775\n[0.50822044 0.51398785 0.5054375 0.50569332 0.50535224] 0.5001274747099228\n[0.51398785 0.5054375 0.50569332 0.50535224 0.50012747] 0.5034685355899104\n[0.5054375 0.50569332 0.50535224 0.50012747 0.50346854] 0.5045537708064304\n[0.50569332 0.50535224 0.50012747 0.50346854 0.50455377] 0.5025615309378749\n[0.50535224 0.50012747 0.50346854 0.50455377 0.50256153] 0.4995460833357295\n[0.50012747 0.50346854 0.50455377 0.50256153 0.49954608] 0.4874066050861625\n[0.50346854 0.50455377 0.50256153 0.49954608 0.48740661] 0.49294922223318527\n[0.50455377 0.50256153 0.49954608 0.48740661 0.49294922] 0.4993677581982483\n[0.50256153 0.49954608 0.48740661 0.49294922 0.49936776] 0.49901891407147037\n[0.49954608 0.48740661 0.49294922 0.49936776 0.49901891] 0.502693347090986\n[0.48740661 0.49294922 0.49936776 0.49901891 0.50269335] 0.4903058023202552\n[0.49294922 0.49936776 0.49901891 0.50269335 0.4903058 ] 0.49006549621115414\n[0.49936776 0.49901891 0.50269335 0.4903058 0.4900655 ] 0.4869569825785819\n[0.49901891 0.50269335 0.4903058 0.4900655 0.48695698] 0.5009336536946574\n[0.50269335 0.4903058 0.4900655 0.48695698 0.50093365] 0.502724337576796\n[0.4903058 0.4900655 0.48695698 0.50093365 0.50272434] 0.4982902818434142\n[0.4900655 0.48695698 0.50093365 0.50272434 0.49829028] 0.49411200498157204\n[0.48695698 0.50093365 0.50272434 0.49829028 0.494112 ] 0.49249959972560453\n[0.50093365 0.50272434 0.49829028 0.494112 0.4924996 ] 0.49829799419379006\n[0.50272434 0.49829028 0.494112 0.4924996 0.49829799] 0.5047165774453515\n[0.49829028 0.494112 0.4924996 0.49829799 0.50471658] 0.5122824249467398\n[0.494112 0.4924996 0.49829799 0.50471658 0.51228242] 0.510739807585765\n[0.4924996 0.49829799 0.50471658 0.51228242 0.51073981] 0.5038483702024985\n[0.49829799 0.50471658 0.51228242 0.51073981 0.50384837] 0.5033522387105478\n[0.50471658 0.51228242 0.51073981 0.50384837 0.50335224] 0.5096622839444325\n[0.51228242 0.51073981 0.50384837 0.50335224 0.50966228] 0.5023367433273337\n[0.51073981 0.50384837 0.50335224 0.50966228 0.50233674] 0.5043367428327634\n[0.50384837 0.50335224 0.50966228 0.50233674 0.50433674] 0.5066545486926628\n[0.50335224 0.50966228 0.50233674 0.50433674 0.50665455] 0.5089956334631848\n[0.50966228 0.50233674 0.50433674 0.50665455 0.50899563] 0.5012592196860012\n[0.50233674 0.50433674 0.50665455 0.50899563 0.50125922] 
0.4975228056948655\n[0.50433674 0.50665455 0.50899563 0.50125922 0.49752281] 0.49342207558970175\n[0.50665455 0.50899563 0.50125922 0.49752281 0.49342208] 0.4987011077170007\n[0.50899563 0.50125922 0.49752281 0.49342208 0.49870111] 0.5062747140800747\n[0.50125922 0.49752281 0.49342208 0.49870111 0.50627471] 0.5069491241981966\n[0.49752281 0.49342208 0.49870111 0.50627471 0.50694912] 0.503910398460617\n[0.49342208 0.49870111 0.50627471 0.50694912 0.5039104 ] 0.4980499284478147\n[0.49870111 0.50627471 0.50694912 0.5039104 0.49804993] 0.5010654225612701\n[0.50627471 0.50694912 0.5039104 0.49804993 0.50106542] 0.5162823766711009\n[0.50694912 0.5039104 0.49804993 0.50106542 0.51628238] 0.5201660788025977\n[0.5039104 0.49804993 0.50106542 0.51628238 0.52016608] 0.5276156286494348\n[0.49804993 0.50106542 0.51628238 0.52016608 0.52761563] 0.5230885549473131\n[0.50106542 0.51628238 0.52016608 0.52761563 0.52308855] 0.5265923744045349\n[0.51628238 0.52016608 0.52761563 0.52308855 0.52659237] 0.5218559843039335\n[0.52016608 0.52761563 0.52308855 0.52659237 0.52185598] 0.5360264525447402\n[0.52761563 0.52308855 0.52659237 0.52185598 0.53602645] 0.5429488804138166\n[0.52308855 0.52659237 0.52185598 0.53602645 0.54294888] 0.5414837625230219\n[0.52659237 0.52185598 0.53602645 0.54294888 0.54148376] 0.53807291452323\n[0.52185598 0.53602645 0.54294888 0.54148376 0.53807291] 0.5509798216348473\n[0.53602645 0.54294888 0.54148376 0.53807291 0.55097982] 0.5498790671445789\n[0.54294888 0.54148376 0.53807291 0.55097982 0.54987907] 0.5513131472630652\n[0.54148376 0.53807291 0.55097982 0.54987907 0.55131315] 0.5531115900068896\n[0.53807291 0.55097982 0.54987907 0.55131315 0.55311159] 0.5532511185103766\n[0.55097982 0.54987907 0.55131315 0.55311159 0.55325112] 0.5537394911406415\n[0.54987907 0.55131315 0.55311159 0.55325112 0.55373949] 0.5472589261422716\n[0.55131315 0.55311159 0.55325112 0.55373949 0.54725893] 0.5501658830132384\n[0.55311159 0.55325112 0.55373949 0.54725893 0.55016588] 0.5348559101594792\n[0.55325112 0.55373949 0.54725893 0.55016588 0.53485591] 0.5364760277658225\n[0.55373949 0.54725893 0.55016588 0.53485591 0.53647603] 0.5386388339101734\n[0.54725893 0.55016588 0.53485591 0.53647603 0.53863883] 0.53804192403742\n[0.55016588 0.53485591 0.53647603 0.53863883 0.53804192] 0.5742509958763312\n[0.53485591 0.53647603 0.53863883 0.53804192 0.574251 ] 0.5725765615870568\n[0.53647603 0.53863883 0.53804192 0.574251 0.57257656] 0.5722122462482172\n[0.53863883 0.53804192 0.574251 0.57257656 0.57221225] 0.579080404720861\n[0.53804192 0.574251 0.57257656 0.57221225 0.5790804 ] 0.579142385692481\n[0.574251 0.57257656 0.57221225 0.5790804 0.57914239] 0.5844912049395842\n[0.57257656 0.57221225 0.5790804 0.57914239 0.5844912 ] 0.5793904987249547\n[0.57221225 0.5790804 0.57914239 0.5844912 0.5793905 ] 0.5851500965591461\n[0.5790804 0.57914239 0.5844912 0.5793905 0.5851501 ] 0.5902043410759038\n[0.57914239 0.5844912 0.5793905 0.5851501 0.59020434] 0.5835454982265512\n[0.5844912 0.5793905 0.5851501 0.59020434 0.5835455 ] 0.5810725973029182\n[0.5793905 0.5851501 0.59020434 0.5835455 0.5810726 ] 0.5792742018455921\n[0.5851501 0.59020434 0.5835455 0.5810726 0.5792742 ] 0.5794679989703232\n[0.59020434 0.5835455 0.5810726 0.5792742 0.579468 ] 0.575522268580708\n[0.5835455 0.5810726 0.5792742 0.579468 0.57552227] 0.5845067242133326\n[0.5810726 0.5792742 0.579468 0.57552227 0.58450672] 0.5741114673728442\n[0.5792742 0.579468 0.57552227 0.58450672 0.57411147] 0.5735610664844609\n[0.579468 0.57552227 0.58450672 0.57411147 
0.57356107] 0.5860493415743074\n[0.57552227 0.58450672 0.57411147 0.57356107 0.58604934] 0.58718884618726\n[0.58450672 0.57411147 0.57356107 0.58604934 0.58718885] 0.590793491311783\n[0.57411147 0.57356107 0.58604934 0.58718885 0.59079349] 0.6013360362926324\n[0.57356107 0.58604934 0.58718885 0.59079349 0.60133604] 0.5960648110887061\n[0.58604934 0.58718885 0.59079349 0.60133604 0.59606481] 0.5761036599549014\n[0.58718885 0.59079349 0.60133604 0.59606481 0.57610366] 0.5759098628301703\n[0.59079349 0.60133604 0.59606481 0.57610366 0.57590986] 0.5671967518541436\n[0.60133604 0.59606481 0.57610366 0.57590986 0.56719675] 0.5582898437533859\n[0.59606481 0.57610366 0.57590986 0.56719675 0.55828984] 0.5633053373722711\n[0.57610366 0.57590986 0.56719675 0.55828984 0.56330534] 0.5735610664844609\n[0.57590986 0.56719675 0.55828984 0.56330534 0.57356107] 0.5832897201302\n[0.56719675 0.55828984 0.56330534 0.57356107 0.58328972] 0.5880338698676756\n[0.55828984 0.56330534 0.57356107 0.58328972 0.58803387] 0.5911733266995595\n[0.56330534 0.57356107 0.58328972 0.58803387 0.59117333] 0.5906927129309804\n[0.57356107 0.58328972 0.58803387 0.59117333 0.59069271] 0.590793491311783\n[0.58328972 0.58803387 0.59117333 0.59069271 0.59079349] 0.5974136313249498\n[0.58803387 0.59117333 0.59069271 0.59079349 0.59741363] 0.6090724020081292\n[0.59117333 0.59069271 0.59079349 0.59741363 0.6090724 ] 0.6191111504331506\n[0.59069271 0.59079349 0.59741363 0.6090724 0.61911115] 0.6141034629624496\n[0.59079349 0.59741363 0.6090724 0.61911115 0.61410346] 0.6096615530191968\n[0.59741363 0.6090724 0.61911115 0.61410346 0.60966155] 0.6086383460607954\n[0.6090724 0.61911115 0.61410346 0.60966155 0.60863835] 0.6059174266776853\n[0.61911115 0.61410346 0.60966155 0.60863835 0.60591743] 0.603297285675378\n[0.61410346 0.60966155 0.60863835 0.60591743 0.60329729] 0.5975841503142467\n[0.60966155 0.60863835 0.60591743 0.60329729 0.59758415] 0.5966306839643395\n[0.60863835 0.60591743 0.60329729 0.59758415 0.59663068] 0.5952818629529074\n[0.60591743 0.60329729 0.59758415 0.59663068 0.59528186] 0.6097003504283794\n[0.60329729 0.59758415 0.59663068 0.59528186 0.60970035] 0.62325062988581\n[0.59758415 0.59663068 0.59528186 0.60970035 0.62325063] 0.6262894029098881\n[0.59663068 0.59528186 0.60970035 0.62325063 0.6262894 ] 0.6385606027395693\n[0.59528186 0.60970035 0.62325063 0.6262894 0.6385606 ] 0.6422117103371523\n[0.60970035 0.62325063 0.6262894 0.6385606 0.64221171] 0.64168463564589\n[0.62325063 0.6262894 0.6385606 0.64221171 0.64168464] 0.6388551782451031\n[0.6262894 0.6385606 0.64221171 0.64168464 0.63885518] 0.6411110039085708\n[0.6385606 0.64221171 0.64168464 0.63885518 0.641111 ] 0.6540876516286823\n[0.64221171 0.64168464 0.63885518 0.641111 0.65408765] 0.65370005737922\n[0.64168464 0.63885518 0.641111 0.65408765 0.65370006] 0.6616224605825737\n[0.63885518 0.641111 0.65408765 0.65370006 0.66162246] 0.6599248435060504\n[0.641111 0.65408765 0.65370006 0.66162246 0.65992484] 0.6659092762372811\n[0.65408765 0.65370006 0.66162246 0.65992484 0.66590928] 0.6800952637518363\n[0.65370006 0.66162246 0.65992484 0.66590928 0.68009526] 0.6910718646800196\n[0.66162246 0.65992484 0.66590928 0.68009526 0.69107186] 0.6866300493097636\n[0.65992484 0.66590928 0.68009526 0.69107186 0.68663005] 0.6913819586841133\n[0.66590928 0.68009526 0.69107186 0.68663005 0.69138196] 0.695622217292764\n[0.68009526 0.69107186 0.68663005 0.69138196 0.69562222] 0.6954206605311587\n[0.69107186 0.68663005 0.69138196 0.69562222 0.69542066] 0.6862036576511187\n[0.68663005 
0.69138196 0.69562222 0.69542066 0.68620366] 0.691048585769397\n[0.69138196 0.69562222 0.69542066 0.68620366 0.69104859] 0.689312171283691\n[0.69562222 0.69542066 0.68620366 0.69104859 0.68931217] 0.6460567096324633\n[0.69542066 0.68620366 0.69104859 0.68931217 0.64605671] 0.6025686542225103\n[0.68620366 0.69104859 0.68931217 0.64605671 0.60256865] 0.6217932724070779\n[0.69104859 0.68931217 0.64605671 0.60256865 0.62179327] 0.5969717211677449\n[0.68931217 0.64605671 0.60256865 0.62179327 0.59697172] 0.5604914000204211\n[0.64605671 0.60256865 0.62179327 0.59697172 0.5604914 ] 0.5885997419681205\n[0.60256865 0.62179327 0.59697172 0.5604914 0.58859974] 0.5995763428963039\n[0.62179327 0.59697172 0.5604914 0.58859974 0.59957634] 0.5997004001877292\n[0.59697172 0.5604914 0.58859974 0.59957634 0.5997004 ] 0.6133436983890883\n[0.5604914 0.58859974 0.59957634 0.5997004 0.6133437 ] 0.6287079879257785\n[0.58859974 0.59957634 0.5997004 0.6133437 0.62870799] 0.6328010056805664\n[0.59957634 0.5997004 0.6133437 0.62870799 0.63280101] 0.6387388813657404\n[0.5997004 0.6133437 0.62870799 0.63280101 0.63873888] 0.6456225591121326\n[0.6133437 0.62870799 0.63280101 0.63873888 0.64562256] 0.6419714515145496\n[0.62870799 0.63280101 0.63873888 0.64562256 0.64197145] 0.6575992780092769\n[0.63280101 0.63873888 0.64562256 0.64197145 0.65759928] 0.6707464447186852\n[0.63873888 0.64562256 0.64197145 0.65759928 0.67074644] 0.6510101757684185\n[0.64562256 0.64197145 0.65759928 0.67074644 0.65101018] 0.6404985739868806\n[0.64197145 0.65759928 0.67074644 0.65101018 0.64049857] 0.6132042179472882\n[0.65759928 0.67074644 0.65101018 0.64049857 0.61320422] 0.6204910084417027\n[0.67074644 0.65101018 0.64049857 0.61320422 0.62049101] 0.629801030065671\n[0.65101018 0.64049857 0.61320422 0.62049101 0.62980103] 0.6330025624421718\n[0.64049857 0.61320422 0.62049101 0.62980103 0.63300256] 0.6443047766481973\n[0.61320422 0.62049101 0.62980103 0.63300256 0.64430478] 0.656986848862775\n[0.62049101 0.62980103 0.63300256 0.64430478 0.65698685] 0.683374295598517\n[0.62980103 0.63300256 0.64430478 0.65698685 0.6833743 ] 0.686831606071369\n[0.63300256 0.64430478 0.65698685 0.6833743 0.68683161] 0.6664209270029803\n[0.64430478 0.65698685 0.6833743 0.68683161 0.66642093] 0.675196018950627\n[0.65698685 0.6833743 0.68683161 0.66642093 0.67519602] 0.6752657595591212\n[0.6833743 0.68683161 0.66642093 0.67519602 0.67526576] 0.6645294174535407\n[0.68683161 0.66642093 0.67519602 0.67526576 0.66452942] 0.6366923721007522\n[0.66642093 0.67519602 0.67526576 0.66452942 0.63669237] 0.635056735995849\n[0.67519602 0.67526576 0.66452942 0.63669237 0.63505674] 0.6297622326564885\n[0.67526576 0.66452942 0.63669237 0.63505674 0.62976223] 0.5973593154172072\n[0.66452942 0.63669237 0.63505674 0.62976223 0.59735932] 0.5760339193464071\n[0.63669237 0.63505674 0.62976223 0.59735932 0.57603392] 0.6005608477937079\n[0.63505674 0.62976223 0.59735932 0.57603392 0.60056085] 0.5632665407382769\n[0.62976223 0.59735932 0.57603392 0.60056085 0.56326654] 0.5628479560030045\n[0.59735932 0.57603392 0.60056085 0.56326654 0.56284796] 0.5839563706114477\n[0.57603392 0.60056085 0.56326654 0.56284796 0.58395637] 0.564328545105861\n[0.60056085 0.56326654 0.56284796 0.58395637 0.56432855] 0.5697083548387741\n[0.56326654 0.56284796 0.58395637 0.56432855 0.56970835] 0.5788013484890756\n[0.56284796 0.58395637 0.56432855 0.56970835 0.57880135] 0.580871135889498\n[0.58395637 0.56432855 0.56970835 0.57880135 0.58087114] 0.5647704079765674\n[0.56432855 0.56970835 0.57880135 0.58087114 
0.56477041] 0.5712897696089316\n[0.56970835 0.57880135 0.58087114 0.56477041 0.57128977] 0.583840073732085\n[0.57880135 0.58087114 0.56477041 0.57128977 0.58384007] 0.5747935898413419\n[0.58087114 0.56477041 0.57128977 0.58384007 0.57479359] 0.5845144838502068\n[0.56477041 0.57128977 0.58384007 0.57479359 0.58451448] 0.5820028808655764\n[0.57128977 0.58384007 0.57479359 0.58451448 0.58200288] 0.588754741683669\n[0.58384007 0.57479359 0.58451448 0.58200288 0.58875474] 0.616801103434937\n[0.57479359 0.58451448 0.58200288 0.58875474 0.6168011 ] 0.6151886508924711\n[0.58451448 0.58200288 0.58875474 0.6168011 0.61518865] 0.6272970913697296\n[0.58200288 0.58875474 0.6168011 0.61518865 0.62729709] 0.6158708206474671\n[0.58875474 0.6168011 0.61518865 0.62729709 0.61587082] 0.6115995242665081\n[0.6168011 0.61518865 0.62729709 0.61587082 0.61159952] 0.5748013487030277\n[0.61518865 0.62729709 0.61587082 0.61159952 0.57480135] 0.5757315849791875\n[0.62729709 0.61587082 0.61159952 0.57480135 0.57573158] 0.590351675727575\n[0.61587082 0.61159952 0.57480135 0.57573158 0.59035168] 0.582607550375204\n[0.61159952 0.57480135 0.57573158 0.59035168 0.58260755] 0.5727471278628522\n[0.57480135 0.57573158 0.59035168 0.58260755 0.57274713] 0.5882354266292809\n[0.57573158 0.59035168 0.58260755 0.57274713 0.58823543] 0.578212197478008\n[0.59035168 0.58260755 0.57274713 0.58823543 0.5782122 ] 0.5777005467123089\n[0.58260755 0.57274713 0.58823543 0.5782122 0.57770055] 0.5966849052990854\n[0.57274713 0.58823543 0.5782122 0.57770055 0.59668491] 0.6017857060867116\n[0.58823543 0.5782122 0.57770055 0.59668491 0.60178571] 0.6011035363317155\n[0.5782122 0.57770055 0.59668491 0.60178571 0.60110354] 0.6234677059211639\n[0.57770055 0.59668491 0.60178571 0.60110354 0.62346771] 0.634948197978172\n[0.59668491 0.60178571 0.60110354 0.62346771 0.6349482 ] 0.6354831276544939\n[0.60178571 0.60110354 0.62346771 0.6349482 0.63548313] 0.636986947606286\n[0.60110354 0.62346771 0.6349482 0.63548313 0.63698695] 0.6207312672643054\n[0.62346771 0.6349482 0.63548313 0.63698695 0.62073127] 0.6227002770591136\n[0.6349482 0.63548313 0.63698695 0.62073127 0.62270028] 0.6202351357723547\n[0.63548313 0.63698695 0.62073127 0.62270028 0.62023514] 0.6107545951590894\n[0.63698695 0.62073127 0.62270028 0.62023514 0.6107546 ] 0.621002564634405\n[0.62073127 0.62270028 0.62023514 0.6107546 0.62100256] 0.6133669765245224\n[0.62270028 0.62023514 0.6107546 0.62100256 0.61336698] 0.6210878237414592\n[0.62023514 0.6107546 0.62100256 0.61336698 0.62108782] 0.6207390269011795\n[0.6107546 0.62100256 0.61336698 0.62108782 0.62073903] 0.6179638861833238\n[0.62100256 0.61336698 0.62108782 0.62073903 0.61796389] 0.6060724263932338\n[0.61336698 0.62108782 0.62073903 0.61796389 0.60607243] 0.6118709162096044\n[0.62108782 0.62073903 0.61796389 0.60607243 0.61187092] 0.6251963607699956\n[0.62073903 0.61796389 0.60607243 0.61187092 0.62519636] 0.6519481236197656\n[0.61796389 0.60607243 0.61187092 0.62519636 0.65194812] 0.6672891342458332\n[0.60607243 0.61187092 0.62519636 0.65194812 0.66728913] 0.6675759501144929\n[0.61187092 0.62519636 0.65194812 0.66728913 0.66757595] 0.6654209036070162\n[0.62519636 0.65194812 0.66728913 0.66757595 0.6654209 ] 0.6553279338472491\n[0.65194812 0.66728913 0.66757595 0.6654209 0.65532793] 0.6530101279873497\n[0.66728913 0.66757595 0.6654209 0.65532793 0.65301013] 0.6600798432215988\n[0.66757595 0.6654209 0.65532793 0.65301013 0.66007984] 0.6673123178082707\n[0.6654209 0.65532793 0.65301013 0.66007984 0.66731232] 
0.6638007860006729\n[0.65532793 0.65301013 0.66007984 0.66731232 0.66380079] 0.677234768578741\n[0.65301013 0.66007984 0.66731232 0.66380079 0.67723477] 0.6773433065964178\n[0.66007984 0.66731232 0.66380079 0.67723477 0.67734331] 0.6937772647893812\n[0.66731232 0.66380079 0.67723477 0.67734331 0.69377726] 0.6895913228636615\n[0.66380079 0.67723477 0.67734331 0.69377726 0.68959132] 0.6909710862992169\n[0.67723477 0.67734331 0.69377726 0.68959132 0.69097109] 0.6815293430951342\n[0.67734331 0.69377726 0.68959132 0.69097109 0.68152934] 0.6798393903072999\n[0.69377726 0.68959132 0.69097109 0.68152934 0.67983939] 0.6560644202851762\n[0.68959132 0.69097109 0.68152934 0.67983939 0.65606442] 0.6511418973485327\n[0.69097109 0.68152934 0.67983939 0.65606442 0.6511419 ] 0.6399171826126872\n[0.68152934 0.67983939 0.65606442 0.6511419 0.63991718] 0.6478551058649777\n[0.67983939 0.65606442 0.6511419 0.63991718 0.64785511] 0.6489636665034303\n[0.65606442 0.6511419 0.63991718 0.64785511 0.64896367] 0.6581185938388534\n[0.6511419 0.63991718 0.64785511 0.64896367 0.65811859] 0.6390722542804569\n[0.63991718 0.64785511 0.64896367 0.65811859 0.63907225] 0.6556457882634055\n[0.64785511 0.64896367 0.65811859 0.63907225 0.65564579] 0.6679713040008294\n[0.64896367 0.65811859 0.63907225 0.65564579 0.6679713 ] 0.6787309242418441\n[0.65811859 0.63907225 0.65564579 0.6679713 0.67873092] 0.6777928818175001\n[0.63907225 0.65564579 0.6679713 0.67873092 0.67779288] 0.6786146265872931\n[0.65564579 0.6679713 0.67873092 0.67779288 0.67861463] 0.7015446682771863\n[0.6679713 0.67873092 0.67779288 0.67861463 0.70154467] 0.7056841485050344\n[0.67873092 0.67779288 0.67861463 0.70154467 0.70568415] 0.7018392437827201\n[0.67779288 0.67861463 0.70154467 0.70568415 0.70183924] 0.7134206095687163\n[0.67861463 0.70154467 0.70568415 0.70183924 0.71342061] 0.7111570250435628\n[0.70154467 0.70568415 0.70183924 0.71342061 0.71115703] 0.7042423095248622\n[0.70568415 0.70183924 0.71342061 0.71115703 0.70424231] 0.7026532296908273\n[0.70183924 0.71342061 0.71115703 0.70424231 0.70265323] 0.7186143345272742\n[0.71342061 0.71115703 0.70424231 0.70265323 0.71861433] 0.7516218267031862\n[0.71115703 0.70424231 0.70265323 0.71861433 0.75162183] 0.7637302671804447\n[0.70424231 0.70265323 0.71861433 0.75162183 0.76373027] 0.7673193938064077\n[0.70265323 0.71861433 0.75162183 0.76373027 0.76731939] 0.7441955549917832\n[0.71861433 0.75162183 0.76373027 0.76731939 0.74419555] 0.7296530110000743\n[0.75162183 0.76373027 0.76731939 0.74419555 0.72965301] 0.7277305590265114\n[0.76373027 0.76731939 0.74419555 0.72965301 0.72773056] 0.7298623273985538\n[0.76731939 0.74419555 0.72965301 0.72773056 0.72986233] 0.7346219956345894\n[0.74419555 0.72965301 0.72773056 0.72986233 0.734622 ] 0.7327304868603381\n[0.72965301 0.72773056 0.72986233 0.734622 0.73273049] 0.7335522324053195\n[0.72773056 0.72986233 0.734622 0.73273049 0.73355223] 0.747079233727316\n[0.72986233 0.734622 0.73273049 0.73355223 0.74707923] 0.7497071335913091\n[0.734622 0.73273049 0.73355223 0.74707923 0.74970713] 0.7524125344758591\n[0.73273049 0.73355223 0.74707923 0.74970713 0.75241253] 0.743505625599913\n[0.73355223 0.74707923 0.74970713 0.75241253 0.74350563] 0.7414901548824216\n[0.74707923 0.74970713 0.75241253 0.74350563 0.74149015] 0.7469862149833875\n[0.74970713 0.75241253 0.74350563 0.74149015 0.74698621] 0.7254980122736664\n[0.75241253 0.74350563 0.74149015 0.74698621 0.72549801] 0.7193817633893245\n[0.74350563 0.74149015 0.74698621 0.72549801 0.71938176] 0.7150949485098054\n[0.74149015 
0.74698621 0.72549801 0.71938176 0.71509495] 0.7203740279236027\n[0.74698621 0.72549801 0.71938176 0.71509495 0.72037403] 0.7156065992755045\n[0.72549801 0.71938176 0.71509495 0.72037403 0.7156066 ] 0.7200328953720121\n[0.71938176 0.71509495 0.72037403 0.7156066 0.7200329 ] 0.7185213157833457\n[0.71509495 0.72037403 0.7156066 0.7200329 0.71852132] 0.7303584588905047\n[0.72037403 0.7156066 0.7200329 0.71852132 0.73035846] 0.7467691389480338\n[0.7156066 0.7200329 0.71852132 0.73035846 0.74676914] 0.738497938129212\n[0.7200329 0.71852132 0.73035846 0.74676914 0.73849794] 0.7525676287644044\n[0.71852132 0.73035846 0.74676914 0.73849794 0.75256763] 0.744676167985174\n[0.73035846 0.74676914 0.73849794 0.75256763 0.74467617] 0.728451430842505\n[0.74676914 0.73849794 0.75256763 0.74467617 0.72845143] 0.7120252322864158\n[0.73849794 0.75256763 0.74467617 0.72845143 0.71202523] 0.70387023377396\n[0.75256763 0.74467617 0.72845143 0.71202523 0.70387023] 0.6922113685177838\n[0.74467617 0.72845143 0.71202523 0.70387023 0.69221137] 0.6870873841677201\n[0.72845143 0.71202523 0.70387023 0.69221137 0.68708738] 0.6869401440890458\n[0.71202523 0.70387023 0.69221137 0.68708738 0.68694014] 0.6968005185397108\n[0.70387023 0.69221137 0.68708738 0.68694014 0.69680052] 0.6855292475329968\n[0.69221137 0.68708738 0.68694014 0.69680052 0.68552925] 0.6952268634064276\n[0.68708738 0.68694014 0.69680052 0.68552925 0.69522686] 0.6930563921981993\n[0.68694014 0.69680052 0.68552925 0.69522686 0.69305639] 0.6802813012396932\n[0.69680052 0.68552925 0.69522686 0.69305639 0.6802813 ] 0.6842889653144301\n[0.68552925 0.69522686 0.69305639 0.6802813 0.68428897] 0.6919400719228725\n[0.69522686 0.69305639 0.6802813 0.68428897 0.69194007] 0.7041725689163679\n[0.69305639 0.6802813 0.68428897 0.69194007 0.70417257] 0.68806412942825\n[0.6802813 0.68428897 0.69194007 0.70417257 0.68806413] 0.6937075241808869\n[0.68428897 0.69194007 0.70417257 0.68806413 0.69370752] 0.702451672929222\n[0.69194007 0.70417257 0.68806413 0.69370752 0.70245167] 0.699226862417287\n[0.70417257 0.68806413 0.69370752 0.70245167 0.69922686] 0.7101957990567813\n[0.68806413 0.69370752 0.70245167 0.69922686 0.7101958 ] 0.7092887944047458\n[0.69370752 0.70245167 0.69922686 0.7101958 0.70928879] 0.7107152094593547\n[0.70245167 0.69922686 0.7101958 0.70928879 0.71071521] 0.7144360568902436\n[0.69922686 0.7101958 0.70928879 0.71071521 0.71443606] 0.7166375658707802\n[0.7101958 0.70928879 0.71071521 0.71443606 0.71663757] 0.689692005896279\n[0.70928879 0.71071521 0.71443606 0.71663757 0.68969201] 0.6812889896995347\n[0.71071521 0.71443606 0.71663757 0.68969201 0.68128899] 0.6747929062026047\n[0.71443606 0.71663757 0.68969201 0.68128899 0.67479291] 0.6669247235588085\n[0.71663757 0.68969201 0.68128899 0.67479291 0.66692472] 0.6222738854004686\n[0.68969201 0.68128899 0.67479291 0.66692472 0.62227389] 0.6208010078727997\n[0.68128899 0.67479291 0.66692472 0.62227389 0.62080101] 0.6446458138516028\n[0.67479291 0.66692472 0.62227389 0.62080101 0.64464581] 0.6308242370240725\n[0.66692472 0.62227389 0.62080101 0.64464581 0.63082424] 0.6533279816283177\n[0.62227389 0.62080101 0.64464581 0.63082424 0.65332798] 0.6489946097027419\n[0.62080101 0.64464581 0.63082424 0.65332798 0.64899461] 0.6275064077682091\n[0.64464581 0.63082424 0.65332798 0.64899461 0.62750641] 0.6340877503721933\n[0.63082424 0.65332798 0.64899461 0.62750641 0.63408775] 0.637731192905899\n[0.65332798 0.64899461 0.62750641 0.63408775 0.63773119] 0.6396923477156476\n[0.64899461 0.62750641 0.63408775 0.63773119 
0.63969235] 0.5986228765463967\n[0.62750641 0.63408775 0.63773119 0.63969235 0.59862288] 0.633397820980323\n[0.63408775 0.63773119 0.63969235 0.59862288 0.63339782] 0.6147157975359546\n[0.63773119 0.63969235 0.59862288 0.63339782 0.6147158 ] 0.5748788962348945\n[0.63969235 0.59862288 0.63339782 0.6147158 0.5748789 ] 0.5873826433119911\n[0.59862288 0.63339782 0.6147158 0.5748789 0.58738264] 0.6188243345644909\n[0.63339782 0.6147158 0.5748789 0.58738264 0.61882433] 0.613576292923002\n[0.6147158 0.5748789 0.58738264 0.61882433 0.61357629] 0.6041112715834851\n[0.5748789 0.58738264 0.61882433 0.61357629 0.60411127] 0.5903903785637608\n[0.58738264 0.61882433 0.61357629 0.60411127 0.59039038] 0.6025764138593844\n[0.61882433 0.61357629 0.60411127 0.59039038 0.60257641] 0.6317079635406739\n[0.61357629 0.60411127 0.59039038 0.60257641 0.63170796] 0.62318864891419\n[0.60411127 0.59039038 0.60257641 0.63170796 0.62318865] 0.6105918358066666\n[0.59039038 0.60257641 0.63170796 0.62318865 0.61059184] 0.5892586335876824\n[0.60257641 0.63170796 0.62318865 0.61059184 0.58925863] 0.587258681368751\n[0.63170796 0.62318865 0.61059184 0.58925863 0.58725868] 0.5931578542177391\n[0.62318865 0.61059184 0.58925863 0.58725868 0.59315785] 0.6094755155313398\n[0.61059184 0.58925863 0.58725868 0.59315785 0.60947552] 0.6069794310452694\n[0.58925863 0.58725868 0.59315785 0.60947552 0.60697943] 0.5748168679767761\n[0.58725868 0.59315785 0.60947552 0.60697943 0.57481687] 0.5792819614824664\n[0.59315785 0.60947552 0.60697943 0.57481687 0.57928196] 0.5884679258150094\n[0.60947552 0.60697943 0.57481687 0.57928196 0.58846793] 0.5778246032285458\n[0.60697943 0.57481687 0.57928196 0.58846793 0.5778246 ] 0.5970027589400534\n[0.57481687 0.57928196 0.58846793 0.5778246 0.59700276] 0.5937392455919326\n[0.57928196 0.58846793 0.5778246 0.59700276 0.59373925] 0.626157586756777\n[0.58846793 0.5778246 0.59700276 0.59373925 0.62615759] 0.6277622804375571\n[0.5778246 0.59700276 0.59373925 0.62615759 0.62776228] 0.6325141898119069\n[0.59700276 0.59373925 0.62615759 0.62776228 0.63251419] 0.641816451799001\n[0.59373925 0.62615759 0.62776228 0.63251419 0.64181645] 0.5987081356534509\n[0.62615759 0.62776228 0.63251419 0.64181645 0.59870814] 0.612591788025598\n[0.62776228 0.63251419 0.64181645 0.59870814 0.61259179] 0.5876694591806507\n[0.63251419 0.64181645 0.59870814 0.61259179 0.58766946] 0.5899718411149869\n[0.64181645 0.59870814 0.61259179 0.58766946 0.58997184] 0.5994291028176296\n[0.59870814 0.61259179 0.58766946 0.58997184 0.5994291 ] 0.6086771434699779\n[0.61259179 0.58766946 0.58997184 0.5994291 0.60867714] 0.6072972846862374\n[0.58766946 0.58997184 0.5994291 0.60867714 0.60729728] 0.591948515198484\n[0.58997184 0.5994291 0.60867714 0.60729728 0.59194852] 0.5721269863659746\n[0.5994291 0.60867714 0.60729728 0.59194852 0.57212699] 0.5815687295700572\n[0.60867714 0.60729728 0.59194852 0.57212699 0.58156873] 0.5771501931104239\n[0.60729728 0.59194852 0.57212699 0.58156873 0.57715019] 0.566607600843076\n[0.59194852 0.57212699 0.58156873 0.57715019 0.5666076 ] 0.5434527242561432\n[0.57212699 0.58156873 0.57715019 0.5666076 0.54345272] 0.5408790930133943\n[0.58156873 0.57715019 0.5666076 0.54345272 0.54087909] 0.5899020059334958\n[0.57715019 0.5666076 0.54345272 0.54087909 0.58990201] 0.5933283732070361\n[0.5666076 0.54345272 0.54087909 0.58990201 0.59332837] 0.5880570534301129\n[0.54345272 0.54087909 0.58990201 0.59332837 0.58805705] 0.5869175488171604\n[0.54087909 0.58990201 0.59332837 0.58805705 0.58691755] 
0.5948554720694509\n[0.58990201 0.59332837 0.58805705 0.58691755 0.59485547] 0.5717626237406365\n[0.59332837 0.58805705 0.58691755 0.59485547 0.57176262] 0.614126646524887\n[0.58805705 0.58691755 0.59485547 0.57176262 0.61412665] 0.612328251067561\n[0.58691755 0.59485547 0.57176262 0.61412665 0.61232825] 0.6184444991767145\n[0.59485547 0.57176262 0.61412665 0.61232825 0.6184445 ] 0.6171886976843992\n[0.57176262 0.61412665 0.61232825 0.6184445 0.6171887 ] 0.6138320710193532\n[0.61412665 0.61232825 0.6184445 0.6171887 0.61383207] 0.6036460825156575\n[0.61232825 0.6184445 0.6171887 0.61383207 0.60364608] 0.5939562262791012\n[0.6184445 0.6171887 0.61383207 0.60364608 0.59395623] 0.6191189092948364\n[0.6171887 0.61383207 0.60364608 0.59395623 0.61911891] 0.6220800882757375\n[0.61383207 0.60364608 0.59395623 0.61911891 0.62208009] 0.6290025626561239\n[0.60364608 0.59395623 0.61911891 0.62208009 0.62900256] 0.6354831276544939\n[0.59395623 0.61911891 0.62208009 0.62900256 0.63548313] 0.6139794064462127\n[0.61911891 0.62208009 0.62900256 0.63548313 0.61397941] 0.6178940510018327\n[0.62208009 0.62900256 0.63548313 0.61397941 0.61789405] 0.6165995466733316\n[0.62900256 0.63548313 0.61397941 0.61789405 0.61659955] 0.6298474917635427\n[0.63548313 0.61397941 0.61789405 0.61659955 0.62984749] 0.6136382738946221\n[0.61397941 0.61789405 0.61659955 0.62984749 0.61363827] 0.6063050209271476\n[0.61789405 0.61659955 0.62984749 0.61363827 0.60630502] 0.6283514314486247\n[0.61659955 0.62984749 0.61363827 0.60630502 0.62835143] 0.6495217797421894\n[0.62984749 0.61363827 0.60630502 0.62835143 0.64952178] 0.6451652242541761\n[0.61363827 0.60630502 0.62835143 0.64952178 0.64516522] 0.6622581686396981\n[0.60630502 0.62835143 0.64952178 0.64516522 0.66225817] 0.6724828592043911\n[0.62835143 0.64952178 0.64516522 0.66225817 0.67248286] 0.648638053225588\n[0.64952178 0.64516522 0.66225817 0.67248286 0.64863805] 0.6358319244947734\n[0.64516522 0.66225817 0.67248286 0.64863805 0.63583192] 0.6330025624421718\n[0.66225817 0.67248286 0.64863805 0.63583192 0.63300256] 0.6329637650329891\n[0.67248286 0.64863805 0.63583192 0.63300256 0.63296377] 0.653397722236812\n[0.64863805 0.63583192 0.63300256 0.63296377 0.65339772] 0.6524597743854648\n[0.63583192 0.63300256 0.63296377 0.65339772 0.65245977] 0.6536303167707258\n[0.63300256 0.63296377 0.65339772 0.65245977 0.65363032] 0.6474132895055812\n[0.63296377 0.65339772 0.65245977 0.65363032 0.64741329] 0.651219492166898\n[0.65339772 0.65245977 0.65363032 0.64741329 0.65121949] 0.6475295871601323\n[0.65245977 0.65363032 0.64741329 0.65121949 0.64752959] 0.6344831042585298\n[0.65363032 0.64741329 0.65121949 0.64752959 0.6344831 ] 0.6448706487486423\n[0.64741329 0.65121949 0.64752959 0.6344831 0.64487065] 0.644118738385152\n[0.65121949 0.64752959 0.6344831 0.64487065 0.64411874] 0.648560553755408\n[0.64752959 0.6344831 0.64487065 0.64411874 0.64856055] 0.6492737612827124\n[0.6344831 0.64487065 0.64411874 0.64856055 0.64927376] 0.6522737368976078\n[0.64487065 0.64411874 0.64856055 0.64927376 0.65227374] 0.6686069167097686\n[0.64411874 0.64856055 0.64927376 0.65227374 0.66860692] 0.6738859961235659\n[0.64856055 0.64927376 0.65227374 0.66860692 0.673886 ] 0.6849169129594919\n[0.64927376 0.65227374 0.66860692 0.673886 0.68491691] 0.6816843428106827\n[0.65227374 0.66860692 0.673886 0.68491691 0.68168434] 0.6703976478784056\n[0.66860692 0.673886 0.68491691 0.68168434 0.67039765] 0.6696378833050443\n[0.673886 0.68491691 0.68168434 0.67039765 0.66963788] 0.6955602363211439\n[0.68491691 
0.68168434 0.67039765 0.66963788 0.69556024] 0.7090794780062663\n[0.68168434 0.67039765 0.66963788 0.69556024 0.70907948] 0.7091724967501947\n[0.67039765 0.66963788 0.69556024 0.70907948 0.7091725 ] 0.7031493619579664\n[0.66963788 0.69556024 0.70907948 0.7091725 0.70314936] 0.7023043382775509\n[0.69556024 0.70907948 0.7091725 0.70314936 0.70230434] 0.7021493385620023\n[0.70907948 0.7091725 0.70314936 0.70230434 0.70214934] 0.7134593124049021\n[0.7091725 0.70314936 0.70230434 0.70214934 0.71345931] 0.7329320436219435\n[0.70314936 0.70230434 0.70214934 0.71345931 0.73293204] 0.73880027327162\n[0.70230434 0.70214934 0.71345931 0.73293204 0.73880027] 0.7186143345272742\n[0.70214934 0.71345931 0.73293204 0.73880027 0.71861433] 0.7089244782907177\n[0.71345931 0.73293204 0.73880027 0.71861433 0.70892448] 0.7024283947937877\n[0.73293204 0.73880027 0.71861433 0.70892448 0.70242839] 0.6934362275859758\n[0.73880027 0.71861433 0.70892448 0.70242839 0.69343623] 0.6899246004301928\n[0.71861433 0.70892448 0.70242839 0.69343623 0.6899246 ] 0.6936610624830153\n[0.70892448 0.70242839 0.69343623 0.6899246 0.69366106] 0.7100330397043586\n[0.70242839 0.69343623 0.6899246 0.69366106 0.71003304] 0.7147306323957774\n[0.69343623 0.6899246 0.69366106 0.71003304 0.71473063] 0.7189399478051164\n[0.6899246 0.69366106 0.71003304 0.71473063 0.71893995] 0.725978625267057\n[0.69366106 0.71003304 0.71473063 0.71893995 0.72597863] 0.7198934141550236\n[0.71003304 0.71473063 0.71893995 0.72597863 0.71989341] 0.7173274952626505\n[0.71473063 0.71893995 0.72597863 0.71989341 0.7173275 ] 0.7122190294111469\n[0.71893995 0.72597863 0.71989341 0.7173275 0.71221903] 0.7160252312972752\n[0.72597863 0.71989341 0.7173275 0.71221903 0.71602523] 0.7179321647722781\n[0.71989341 0.7173275 0.71221903 0.71602523 0.71793216] 0.728203412383028\n[0.7173275 0.71221903 0.71602523 0.71793216 0.72820341] 0.7307072565059727\n[0.71221903 0.71602523 0.71793216 0.72820341 0.73070726] 0.735381665634954\n[0.71602523 0.71793216 0.72820341 0.73070726 0.73538167] 0.7425211214776973\n[0.71793216 0.72820341 0.73070726 0.73538167 0.74252112] 0.7425443996131316\n[0.72820341 0.73070726 0.73538167 0.74252112 0.7425444 ] 0.7522109777142538\n[0.73070726 0.73538167 0.74252112 0.7425444 0.75221098] 0.7643892533730033\n[0.73538167 0.74252112 0.7425444 0.75221098 0.76438925] 0.7577613537229623\n[0.74252112 0.7425444 0.75221098 0.76438925 0.75776135] 0.7635364700557136\n[0.7425444 0.75221098 0.76438925 0.75776135 0.76353647] 0.7703039454957399\n[0.75221098 0.76438925 0.75776135 0.76353647 0.77030395] 0.7822417724107047\n[0.76438925 0.75776135 0.76353647 0.77030395 0.78224177] 0.705420610771809\n[0.75776135 0.76353647 0.77030395 0.78224177 0.70542061] 0.6896067467892248\n[0.76353647 0.77030395 0.78224177 0.70542061 0.68960675] 0.6853664881805741\n[0.77030395 0.78224177 0.70542061 0.68960675 0.68536649] 0.7030330643034154\n[0.78224177 0.70542061 0.68960675 0.68536649 0.70303306] 0.7061260594374277\n[0.70542061 0.68960675 0.68536649 0.70303306 0.70612606] 0.6942733970565204\n[0.68960675 0.68536649 0.70303306 0.70612606 0.6942734 ] 0.6882037052182353\n[0.68536649 0.70303306 0.70612606 0.6942734 0.68820371] 0.6851882103295914\n[0.70303306 0.70612606 0.6942734 0.68820371 0.68518821] 0.6866533282203863\n[0.70612606 0.6942734 0.68820371 0.68518821 0.68665333] 0.6616612579917563\n[0.6942734 0.68820371 0.68518821 0.68665333 0.66166126] 0.6526767550726333\n[0.68820371 0.68518821 0.68665333 0.66166126 0.65267676] 0.6866067711743294\n[0.68518821 0.68665333 0.66166126 0.65267676 
0.68660677] 0.698056320032026\n[0.68665333 0.66166126 0.65267676 0.68660677 0.69805632] 0.6851262293579714\n[0.66166126 0.65267676 0.68660677 0.69805632 0.68512623] 0.6669480024694311\n[0.65267676 0.68660677 0.69805632 0.68512623 0.666948 ] 0.6753045569683038\n[0.68660677 0.69805632 0.68512623 0.666948 0.67530456] 0.6766921746137302\n[0.69805632 0.68512623 0.666948 0.67530456 0.67669217] 0.6684363984956601\n[0.68512623 0.666948 0.67530456 0.67669217 0.6684364 ] 0.6627774844692746\n[0.666948 0.67530456 0.67669217 0.6684364 0.66277748] 0.6633046537335339\n[0.67530456 0.67669217 0.6684364 0.66277748 0.66330465] 0.6495915203506837\n[0.67669217 0.6684364 0.66277748 0.66330465 0.64959152] 0.6507465434621963\n[0.6684364 0.66277748 0.66330465 0.64959152 0.65074654] 0.639645886017776\n[0.66277748 0.66330465 0.64959152 0.65074654 0.63964589] 0.5873981618105512\n[0.66330465 0.64959152 0.65074654 0.63964589 0.58739816] 0.6004368858504678\n[0.64959152 0.65074654 0.63964589 0.58739816 0.60043689] 0.5920415339424124\n[0.65074654 0.63964589 0.58739816 0.60043689 0.59204153] 0.59368492968419\n[0.63964589 0.58739816 0.60043689 0.59204153 0.59368493] 0.6105065766996124\n[0.58739816 0.60043689 0.59204153 0.59368493 0.61050658] 0.621622753417781\n[0.60043689 0.59204153 0.59368493 0.61050658 0.62162275] 0.6203359141531574\n[0.59204153 0.59368493 0.61050658 0.62162275 0.62033591] 0.6190258905509078\n[0.59368493 0.61050658 0.62162275 0.62033591 0.61902589] 0.6281265965515851\n[0.61050658 0.62162275 0.62033591 0.61902589 0.6281266 ] 0.6254754177769694\n[0.62162275 0.62033591 0.61902589 0.6281266 0.62547542] 0.6310180341488036\n[0.62033591 0.61902589 0.6281266 0.62547542 0.63101803] 0.6396226078823418\n[0.61902589 0.6281266 0.62547542 0.63101803 0.63962261] 0.6386381029849377\n[0.6281266 0.62547542 0.63101803 0.63962261 0.6386381 ] 0.6456846346567494\n[0.62547542 0.63101803 0.63962261 0.6386381 0.64568463] 0.6537930761231485\n[0.63101803 0.63962261 0.6386381 0.64568463 0.65379308] 0.648862888897816\n[0.63962261 0.6386381 0.64568463 0.65379308 0.64886289] 0.6262506062758939\n[0.6386381 0.64568463 0.65379308 0.64886289 0.62625061] 0.6211731781966987\n[0.64568463 0.65379308 0.64886289 0.62625061 0.62117318] 0.6182351835534233\n[0.65379308 0.64886289 0.62625061 0.62117318 0.61823518] 0.6220336258026775\n[0.64886289 0.62625061 0.62117318 0.61823518 0.62203363] 0.6352427734837058\n[0.62625061 0.62117318 0.61823518 0.62203363 0.63524277] 0.6455528185036383\n[0.62117318 0.61823518 0.62203363 0.63524277 0.64555282] 0.6535604815892347\n[0.61823518 0.62203363 0.63524277 0.64555282 0.65356048] 0.6613201262153541\n[0.62203363 0.63524277 0.64555282 0.65356048 0.66132013] 0.6495062612436294\n[0.63524277 0.64555282 0.65356048 0.66132013 0.64950626] 0.6560798442107394\n[0.64555282 0.65356048 0.66132013 0.64950626 0.65607984] 0.6682115628234321\n[0.65356048 0.66132013 0.64950626 0.65607984 0.66821156] 0.671103001195839\n[0.66132013 0.64950626 0.65607984 0.66821156 0.671103 ] 0.6716379300969724\n[0.64950626 0.65607984 0.66821156 0.671103 0.67163793] 0.6758549105701888\n[0.65607984 0.66821156 0.671103 0.67163793 0.67585491] 0.6783665135548193\n[0.66821156 0.671103 0.67163793 0.67585491 0.67836651] 0.6727619162113649\n[0.671103 0.67163793 0.67585491 0.67836651 0.67276192] 0.6727463969376165\n[0.67163793 0.67585491 0.67836651 0.67276192 0.6727464 ] 0.6601651031038416\n[0.67585491 0.67836651 0.67276192 0.6727464 0.6601651 ] 0.6663433321846151\n[0.67836651 0.67276192 0.6727464 0.6601651 0.66634333] 0.6726533781936881\n[0.67276192 
0.6727464 0.6601651 0.66634333 0.67265338] 0.6661418707711949\n[0.6727464 0.6601651 0.66634333 0.67265338 0.66614187] 0.6617309986002505\n[0.6601651 0.66634333 0.67265338 0.66614187 0.661731 ] 0.7534280763703832\n[0.66634333 0.67265338 0.66614187 0.661731 0.75342808] 0.7449010028822135\n[0.67265338 0.66614187 0.661731 0.75342808 0.744901 ] 0.7338390482739792\n[0.66614187 0.661731 0.75342808 0.744901 0.73383905] 0.7272809838054292\n[0.661731 0.75342808 0.744901 0.73383905 0.72728098] 0.7213352539103842\n[0.75342808 0.744901 0.73383905 0.72728098 0.72133525] 0.709691907152768\n[0.744901 0.73383905 0.72728098 0.72133525 0.70969191] 0.6773897682942894\n[0.73383905 0.72728098 0.72133525 0.70969191 0.67738977] 0.6910563454062711\n[0.72728098 0.72133525 0.70969191 0.67738977 0.69105635] 0.6941881371742776\n[0.72133525 0.70969191 0.67738977 0.69105635 0.69418814] 0.7180717405622634\n[0.70969191 0.67738977 0.69105635 0.69418814 0.71807174] 0.7050562954329693\n[0.67738977 0.69105635 0.69418814 0.71807174 0.7050563 ] 0.6947462504130368\n[0.69105635 0.69418814 0.71807174 0.7050563 0.69474625] 0.7122345486848953\n[0.69418814 0.71807174 0.7050563 0.69474625 0.71223455] 0.6866688467189462\n[0.71807174 0.7050563 0.69474625 0.71223455 0.68666885] 0.6889711340802855\n[0.7050563 0.69474625 0.71223455 0.68666885 0.68897113] 0.6969865568027561\n[0.69474625 0.71223455 0.68666885 0.68897113 0.69698656] 0.7131492176256199\n[0.71223455 0.68666885 0.68897113 0.69698656 0.71314922] 0.7009322391306846\n[0.68666885 0.68897113 0.69698656 0.71314922 0.70093224] 0.7075678984175998\n[0.68897113 0.69698656 0.71314922 0.70093224 0.7075679 ] 0.7062345966799161\n[0.69698656 0.71314922 0.70093224 0.7075679 0.7062346 ] 0.6765913962329275\n[0.71314922 0.70093224 0.7075679 0.7062346 0.6765914 ] 0.690234695209475\n[0.70093224 0.7075679 0.7062346 0.6765914 0.6902347 ] 0.6894207093013679\n[0.7075679 0.7062346 0.6765914 0.6902347 0.68942071] 0.6918858505881267\n[0.7062346 0.6765914 0.6902347 0.68942071 0.69188585] 0.708808181411355\n[0.6765914 0.6902347 0.68942071 0.69188585 0.70880818] 0.7051260360414635\n[0.6902347 0.68942071 0.69188585 0.70880818 0.70512604] 0.6898471009600128\n[0.68942071 0.69188585 0.70880818 0.70512604 0.6898471 ] 0.6999400699445915\n[0.69188585 0.70880818 0.70512604 0.6898471 0.69994007] 0.7231724467768929\n[0.70880818 0.70512604 0.6898471 0.69994007 0.72317245] 0.7181725189430661\n[0.70512604 0.6898471 0.69994007 0.72317245 0.71817252] 0.7177694054198555\n[0.6898471 0.69994007 0.72317245 0.71817252 0.71776941] 0.7190019287767364\n[0.69994007 0.72317245 0.71817252 0.71776941 0.71900193] 0.7299863839147908\n[0.72317245 0.71817252 0.71776941 0.71900193 0.72998638] 0.740901003871354\n[0.71817252 0.71776941 0.71900193 0.72998638 0.740901 ] 0.7450173005367646\n[0.71776941 0.71900193 0.72998638 0.740901 0.7450173 ] 0.7386142357837632\n[0.71900193 0.72998638 0.740901 0.7450173 0.73861424] 0.7369475611313631\n[0.72998638 0.740901 0.7450173 0.73861424 0.73694756] 0.7394746833897419\n[0.740901 0.7450173 0.73861424 0.73694756 0.73947468] 0.744358314344206\n[0.7450173 0.73861424 0.73694756 0.73947468 0.74435831] 0.737552231416179\n[0.73861424 0.73694756 0.73947468 0.74435831 0.73755223] 0.7407304848820572\n[0.73694756 0.73947468 0.74435831 0.73755223 0.74073048] 0.7288933417748983\n[0.73947468 0.74435831 0.73755223 0.74073048 0.72889334] 0.7504125822569279\n[0.74435831 0.73755223 0.74073048 0.72889334 0.75041258] 0.7464358613815025\n[0.73755223 0.74073048 0.72889334 0.75041258 0.74643586] 0.7338002508647965\n[0.74073048 
0.72889334 0.75041258 0.74643586 0.73380025] 0.7290793792627551\n[0.72889334 0.75041258 0.74643586 0.73380025 0.72907938] 0.7183042405231803\n[0.75041258 0.74643586 0.73380025 0.72907938 0.71830424] 0.6962346464392658\n[0.74643586 0.73380025 0.72907938 0.71830424 0.69623465] 0.704916719642984\n[0.73380025 0.72907938 0.71830424 0.69623465 0.70491672] 0.72132749427351\n[0.72907938 0.71830424 0.69623465 0.70491672 0.72132749] 0.7203042873151085\n[0.71830424 0.69623465 0.70491672 0.72132749 0.72030429] 0.7059245026758223\n[0.69623465 0.70491672 0.72132749 0.72030429 0.7059245 ] 0.7161415289518264\n[0.70491672 0.72132749 0.72030429 0.7059245 0.71614153] 0.7210717161771588\n[0.72132749 0.72030429 0.7059245 0.71614153 0.72107172] 0.7263274221073367\n[0.72030429 0.7059245 0.71614153 0.72107172 0.72632742] 0.727637540282583\n[0.7059245 0.71614153 0.72107172 0.72632742 0.72763754] 0.7476916628738178\n[0.71614153 0.72107172 0.72632742 0.72763754 0.74769166] 0.7481800355040827\n[0.72107172 0.72632742 0.72763754 0.74769166 0.74818004] 0.7554900095609346\n[0.72632742 0.72763754 0.74769166 0.74818004 0.75549001] 0.7496141148473807\n[0.72763754 0.74769166 0.74818004 0.75549001 0.74961411] 0.7501257656130799\n[0.74769166 0.74818004 0.75549001 0.74961411 0.75012577] 0.7475289035213951\n[0.74818004 0.75549001 0.74961411 0.75012577 0.7475289 ] 0.7601876976005385\n[0.75549001 0.74961411 0.75012577 0.7475289 0.7601877 ] 0.7616295365807106\n[0.74961411 0.75012577 0.7475289 0.7601877 0.76162954] 0.7648388285940856\n[0.75012577 0.7475289 0.7601877 0.76162954 0.76483883] 0.7841177626863959\n[0.7475289 0.7601877 0.76162954 0.76483883 0.78411776] 0.7628930977099001\n[0.7601877 0.76162954 0.76483883 0.78411776 0.7628931 ] 0.7618621311146244\n[0.76162954 0.76483883 0.78411776 0.7628931 0.76186213] 0.7609473668257146\n[0.76483883 0.78411776 0.7628931 0.76186213 0.76094737] 0.7715131899419982\n[0.78411776 0.7628931 0.76186213 0.76094737 0.77151319] 0.7851797670539801\n[0.7628931 0.76186213 0.76094737 0.77151319 0.78517977] 0.7856914178196791\n[0.76186213 0.76094737 0.77151319 0.78517977 0.78569142] 0.7855131399686964\n[0.76094737 0.77151319 0.78517977 0.78569142 0.78551314] 0.7987378061482849\n[0.77151319 0.78517977 0.78569142 0.78551314 0.79873781] 0.8006835370324704\n[0.78517977 0.78569142 0.78551314 0.79873781 0.80068354] 0.7912416992553909\n[0.78569142 0.78551314 0.79873781 0.80068354 0.7912417 ] 0.7909394594611682\n[0.78551314 0.79873781 0.80068354 0.7912417 0.79093946] 0.7903192706777921\n[0.79873781 0.80068354 0.7912417 0.79093946 0.79031927] 0.8007532776409646\n[0.80068354 0.7912417 0.79093946 0.79031927 0.80075328] 0.8189004667571965\n[0.7912417 0.79093946 0.79031927 0.80075328 0.81890047] 0.8079160116191423\n[0.79093946 0.79031927 0.80075328 0.81890047 0.80791601] 0.8038540316366628\n[0.79031927 0.80075328 0.81890047 0.80791601 0.80385403] 0.7942340105815974\n[0.80075328 0.81890047 0.80791601 0.80385403 0.79423401] 0.7929161335446652\n[0.81890047 0.80791601 0.80385403 0.79423401 0.79291613] 0.7882572429142439\n[0.80791601 0.80385403 0.79423401 0.79291613 0.78825724] 0.7970556129973246\n[0.80385403 0.79423401 0.79291613 0.78825724 0.79705561] 0.8023734898203047\n[0.79423401 0.79291613 0.78825724 0.79705561 0.80237349] 0.8019393385247856\n[0.79291613 0.78825724 0.79705561 0.80237349 0.80193934] 0.7957145523979553\n[0.78825724 0.79705561 0.80237349 0.80193934 0.79571455] 0.7840557817147759\n[0.79705561 0.80237349 0.80193934 0.79571455 0.78405578] 0.7882107804411839\n[0.80237349 0.80193934 0.79571455 0.78405578 
0.78821078] 0.8077920496759022\n[0.80193934 0.79571455 0.78405578 0.78821078 0.80779205] 0.8136757040263302\n[0.79571455 0.78405578 0.78821078 0.80779205 0.8136757 ] 0.8233578006260126\n[0.78405578 0.78821078 0.80779205 0.8136757 0.8233578 ] 0.8256369044249144\n[0.78821078 0.80779205 0.8136757 0.8233578 0.8256369 ] 0.8264895923940191\n[0.80779205 0.8136757 0.8233578 0.8256369 0.82648959] 0.8267686494009928\n[0.8136757 0.8233578 0.8256369 0.82648959 0.82676865] 0.8308383890203465\n[0.8233578 0.8256369 0.82648959 0.82676865 0.83083839] 0.8289468794709068\n[0.8256369 0.82648959 0.82676865 0.83083839 0.82894688] 0.8392879622631478\n[0.82648959 0.82676865 0.83083839 0.82894688 0.83928796] 0.8345980338604181\n[0.82676865 0.83083839 0.82894688 0.83928796 0.83459803] 0.8326600626131068\n[0.83083839 0.82894688 0.83928796 0.83459803 0.83266006] 0.8353112413877225\n[0.82894688 0.83928796 0.83459803 0.83266006 0.83531124] 0.8303112189808989\n[0.83928796 0.83459803 0.83266006 0.83531124 0.83031122] 0.8297298276067057\n[0.83459803 0.83266006 0.83531124 0.83031122 0.82972983] 0.8256369044249144\n[0.83266006 0.83531124 0.83031122 0.82972983 0.8256369 ] 0.8386910516152059\n[0.83531124 0.83031122 0.82972983 0.8256369 0.83869105] 0.8320941905126616\n[0.83031122 0.82972983 0.8256369 0.83869105 0.83209419] 0.8198849716546005\n[0.82972983 0.8256369 0.83869105 0.83209419 0.81988497] 0.8205671414095966\n[0.8256369 0.83869105 0.83209419 0.81988497 0.82056714] 0.8440940929722435\n[0.83869105 0.83209419 0.81988497 0.82056714 0.84409409] 0.8388926083768112\n[0.83209419 0.81988497 0.82056714 0.84409409 0.83889261] 0.8649001259269685\n[0.81988497 0.82056714 0.84409409 0.83889261 0.86490013] 0.8642257158088466\n[0.82056714 0.84409409 0.83889261 0.86490013 0.86422572] 0.872737270023268\n[0.84409409 0.83889261 0.86490013 0.86422572 0.87273727] 0.884760451393472\n[0.83889261 0.86490013 0.86422572 0.87273727 0.88476045] 0.8924348361373488\n[0.86490013 0.86422572 0.87273727 0.88476045 0.89243484] 0.8997991268771318\n[0.86422572 0.87273727 0.88476045 0.89243484 0.89979913] 0.8933263222908244\n[0.87273727 0.88476045 0.89243484 0.89979913 0.89332632] 0.8997758487416975\n[0.88476045 0.89243484 0.89979913 0.89332632 0.89977585] 0.9094657049782541\n[0.89243484 0.89979913 0.89332632 0.89977585 0.9094657 ] 0.9317059126244623\n[0.89979913 0.89332632 0.89977585 0.9094657 0.93170591] 0.9348144254818461\n[0.89332632 0.89977585 0.9094657 0.93170591 0.93481443] 0.9360159110664187\n[0.89977585 0.9094657 0.93170591 0.93481443 0.93601591] 0.9365585996044262\n[0.9094657 0.93170591 0.93481443 0.93601591 0.9365586 ] 0.9211012920989959\n[0.93170591 0.93481443 0.93601591 0.9365586 0.92110129] 0.8956674062861579\n[0.93481443 0.93601591 0.9365586 0.92110129 0.89566741] 0.9101324508076868\n[0.93601591 0.9365586 0.92110129 0.89566741 0.91013245] 0.9148378031359797\n[0.9365586 0.92110129 0.89566741 0.91013245 0.9148378 ] 0.9126749969916289\n[0.92110129 0.89566741 0.91013245 0.9148378 0.912675 ] 0.8959231843825092\n[0.89566741 0.91013245 0.9148378 0.912675 0.89592318] 0.9360081514295445\n[0.91013245 0.9148378 0.912675 0.89592318 0.93600815] 0.905876578352291\n[0.9148378 0.912675 0.89592318 0.93600815 0.90587658] 0.9067758233674524\n[0.912675 0.89592318 0.93600815 0.90587658 0.90677582] 0.928481101337339\n[0.89592318 0.93600815 0.90587658 0.90677582 0.9284811 ] 0.9308066668341125\n[0.93600815 0.90587658 0.90677582 0.9284811 0.93080667] 0.9536360254913884\n[0.90587658 0.90677582 0.9284811 0.93080667 0.95363603] 0.9537212845984425\n[0.90677582 0.9284811 
0.93080667 0.95363603 0.95372128] 0.9610700568396654\n[0.9284811 0.93080667 0.95363603 0.95372128 0.96107006] 0.9582716372111869\n[0.93080667 0.95363603 0.95372128 0.96107006 0.95827164] 0.9629847491763542\n[0.95363603 0.95372128 0.96107006 0.95827164 0.96298475] 0.9621553393426837\n[0.95372128 0.96107006 0.95827164 0.96298475 0.96215534] 0.9675970827607187\n[0.96107006 0.95827164 0.96298475 0.96215534 0.96759708] 0.9609770373205487\n[0.95827164 0.96298475 0.96215534 0.96759708 0.96097704] 0.9353647790837311\n[0.96298475 0.96215534 0.96759708 0.96097704 0.93536478] 0.8861247909034643\n[0.96215534 0.96759708 0.96097704 0.93536478 0.88612479] 0.8604350324212783\n[0.96759708 0.96097704 0.93536478 0.88612479 0.86043503] 0.8641017538656065\n[0.96097704 0.93536478 0.88612479 0.86043503 0.86410175] 0.8058927812647766\n[0.93536478 0.88612479 0.86043503 0.86410175 0.80589278] 0.8223577772300484\n[0.88612479 0.86043503 0.86410175 0.80589278 0.82235778] 0.8609466831869773\n[0.86043503 0.86410175 0.80589278 0.82235778 0.86094668] 0.8239547112739544\n[0.86410175 0.80589278 0.82235778 0.86094668 0.82395471] 0.8589389721063604\n[0.80589278 0.82235778 0.86094668 0.82395471 0.85893897] 0.8066292669275154\n[0.82235778 0.86094668 0.82395471 0.85893897 0.80662927] 0.7906371243187601\n[0.86094668 0.82395471 0.85893897 0.80662927 0.79063712] 0.7264127765625762\n[0.82395471 0.85893897 0.80662927 0.79063712 0.72641278] 0.7766682128395588\n[0.85893897 0.80662927 0.79063712 0.72641278 0.77666821] 0.7262964789080251\n[0.80662927 0.79063712 0.72641278 0.77666821 0.72629648] 0.6483900347661111\n[0.79063712 0.72641278 0.77666821 0.72629648 0.64839003] 0.7296452513632001\n[0.72641278 0.77666821 0.72629648 0.64839003 0.72964525] 0.6246847100042965\n[0.77666821 0.72629648 0.64839003 0.72964525 0.62468471] 0.6521807181536794\n[0.72629648 0.64839003 0.72964525 0.62468471 0.65218072] 0.6343513826784155\n[0.64839003 0.72964525 0.62468471 0.65218072 0.63435138] 0.6486846102716448\n[0.72964525 0.62468471 0.65218072 0.63435138 0.64868461] 0.615374688380328\n[0.62468471 0.65218072 0.63435138 0.64868461 0.61537469] 0.6032042669314495\n[0.65218072 0.63435138 0.64868461 0.61537469 0.60320427] 0.663544913331325\n[0.63435138 0.64868461 0.61537469 0.60320427 0.66354491] 0.6387621595011747\n[0.64868461 0.61537469 0.60320427 0.66354491 0.63876216] 0.6846998376993265\n[0.61537469 0.60320427 0.66354491 0.63876216 0.68469984] 0.6451341864818677\n[0.60320427 0.66354491 0.63876216 0.68469984 0.64513419] 0.6731262315502046\n[0.66354491 0.63876216 0.68469984 0.64513419 0.67312623] 0.6855215832443079\n[0.63876216 0.68469984 0.64513419 0.67312623 0.68552158] 0.6411885033787509\n[0.68469984 0.64513419 0.67312623 0.68552158 0.6411885 ] 0.6529868498519156\n[0.64513419 0.67312623 0.68552158 0.6411885 0.65298685] 0.6351885521489601\n[0.67312623 0.68552158 0.6411885 0.65298685 0.63518855] 0.7042113663255506\n[0.68552158 0.6411885 0.65298685 0.63518855 0.70421137] 0.7038935126845826\n[0.6411885 0.65298685 0.63518855 0.70421137 0.70389351] 0.7223197580325998\n[0.65298685 0.63518855 0.70421137 0.70389351 0.72231976] 0.7232266681116386\n[0.63518855 0.70421137 0.70389351 0.72231976 0.72322667] 0.7279631535604252\n[0.70421137 0.70389351 0.72231976 0.72322667 0.72796315] 0.7680170820599638\n[0.70389351 0.72231976 0.72322667 0.72796315 0.76801708] 0.7627768008305374\n[0.72231976 0.72322667 0.72796315 0.76801708 0.7627768 ] 0.763551989329462\n[0.72322667 0.72796315 0.76801708 0.7627768 0.76355199] 0.7788852403186555\n[0.72796315 0.76801708 0.7627768 0.76355199 
0.77888524] 0.765986092068724\n[0.76801708 0.7627768 0.76355199 0.77888524 0.76598609] 0.727017351499207\n[0.7627768 0.76355199 0.77888524 0.76598609 0.72701735] 0.7633504325678566\n[0.76355199 0.77888524 0.76598609 0.72701735 0.76335043] 0.7735054778722407\n[0.77888524 0.76598609 0.72701735 0.76335043 0.77350548] 0.7758310433690142\n[0.76598609 0.72701735 0.76335043 0.77350548 0.77583104] 0.7731721049575242\n[0.72701735 0.76335043 0.77350548 0.77583104 0.7731721 ] 0.7404514286502718\n[0.76335043 0.77350548 0.77583104 0.7731721 0.74045143] 0.8240244511072602\n[0.77350548 0.77583104 0.7731721 0.74045143 0.82402445] 0.8295903463897171\n[0.77583104 0.7731721 0.74045143 0.82402445 0.82959035] 0.8078462710106479\n[0.7731721 0.74045143 0.82402445 0.82959035 0.80784627] 0.8126447374310545\n[0.74045143 0.82402445 0.82959035 0.80784627 0.81264474] 0.8314895202278457\n[0.82402445 0.82959035 0.80784627 0.81264474 0.83148952] 0.8285361016590073\n[0.82959035 0.80784627 0.81264474 0.83148952 0.8285361 ] 0.8481173708937254\n[0.80784627 0.81264474 0.83148952 0.8285361 0.84811737] 0.8603730514496583\n[0.81264474 0.83148952 0.8285361 0.84811737 0.86037305] 0.8719156198264718\n[0.83148952 0.8285361 0.84811737 0.86037305 0.87191562] 0.8505824168322992\n[0.8285361 0.84811737 0.86037305 0.87191562 0.85058242] 0.8301096622192937\n[0.84811737 0.86037305 0.87191562 0.85058242 0.83010966] 0.8353809819962167\n[0.86037305 0.87191562 0.85058242 0.83010966 0.83538098] 0.8486056481758052\n[0.87191562 0.85058242 0.83010966 0.83538098 0.84860565] 0.8569389245392437\n[0.85058242 0.83010966 0.83538098 0.84860565 0.85693892] 0.8488343628912819\n[0.83010966 0.83538098 0.84860565 0.85693892 0.84883436] 0.8745977418003991\n[0.83538098 0.84860565 0.85693892 0.84883436 0.87459774] 0.8715590633493179\n[0.84860565 0.85693892 0.84883436 0.87459774 0.87155906] 0.8774659958351804\n[0.85693892 0.84883436 0.87459774 0.87155906 0.877466 ] 0.882582221323558\n[0.84883436 0.87459774 0.87155906 0.877466 0.88258222] 0.8832178340324973\n[0.87459774 0.87155906 0.877466 0.88258222 0.88321783] 0.8823573856513302\n[0.87155906 0.877466 0.88258222 0.88321783 0.88235739] 0.8918069830652838\n[0.877466 0.88258222 0.88321783 0.88235739 0.89180698] 0.894054953743692\n[0.88258222 0.88321783 0.88235739 0.89180698 0.89405495] 0.899791368015446\n[0.88321783 0.88235739 0.89180698 0.89405495 0.89979137] 0.8975898590349093\n[0.88235739 0.89180698 0.89405495 0.89979137 0.89758986] 0.8788303353451724\n[0.89180698 0.89405495 0.89979137 0.89758986 0.87883034] 0.8991479956696325\n[0.89405495 0.89979137 0.89758986 0.87883034 0.899148 ] 0.9055200218751372\n[0.89979137 0.89758986 0.87883034 0.899148 0.90552002] 0.9129231100241028\n[0.89758986 0.87883034 0.899148 0.90552002 0.91292311] 0.9204346416177484\n[0.87883034 0.899148 0.90552002 0.91292311 0.92043464] 0.872365195047554\n[0.899148 0.90552002 0.91292311 0.92043464 0.8723652 ] 0.8796055238440968\n[0.90552002 0.91292311 0.92043464 0.8723652 0.87960552] 0.8847759706672206\n[0.91292311 0.92043464 0.8723652 0.87960552 0.88477597] 0.9025045277616818\n[0.92043464 0.8723652 0.87960552 0.88477597 0.90250453] 0.9090161297571718\n[0.8723652 0.87960552 0.88477597 0.90250453 0.90901613] 0.8972642457570671\n[0.87960552 0.88477597 0.90250453 0.90901613 0.89726425] 0.893977454273512\n[0.88477597 0.90250453 0.90901613 0.89726425 0.89397745] 0.9095897614944909\n[0.90250453 0.90901613 0.89726425 0.89397745 0.90958976] 0.9193184151402299\n[0.90901613 0.89726425 0.89397745 0.90958976 0.91931842] 0.8941712513982433\n[0.89726425 
0.89397745 0.90958976 0.91931842 0.89417125] 0.9014270041203493\n[0.89397745 0.90958976 0.91931842 0.89417125 0.901427 ] 0.8383034573657436\n[0.90958976 0.91931842 0.89417125 0.901427 0.83830346] 0.8654892769380361\n[0.91931842 0.89417125 0.901427 0.83830346 0.86548928] 0.8799388014106281\n[0.89417125 0.901427 0.83830346 0.86548928 0.8799388 ] 0.898876698299533\n[0.901427 0.83830346 0.86548928 0.8799388 0.8988767 ] 0.9195431554642727\n[0.83830346 0.86548928 0.8799388 0.8988767 0.91954316] 0.9435739989309327\n[0.86548928 0.8799388 0.8988767 0.91954316 0.943574 ] 0.9354190957666623\n[0.8799388 0.8988767 0.91954316 0.943574 0.9354191 ] 0.9438065934648466\n[0.8988767 0.91954316 0.943574 0.9354191 0.94380659] 0.9554266613118402\n[0.91954316 0.943574 0.9354191 0.94380659 0.95542666] 0.979263707653769\n[0.943574 0.9354191 0.94380659 0.95542666 0.97926371] 0.9556979586819397\n[0.9354191 0.94380659 0.95542666 0.97926371 0.95569796] 0.9628606926601172\n[0.94380659 0.95542666 0.97926371 0.95569796 0.96286069] 0.9574809302137025\n[0.95542666 0.97926371 0.95569796 0.96286069 0.95748093] 0.9608607404411859\n[0.97926371 0.95569796 0.96286069 0.95748093 0.96086074] 0.9589615666030572\n[0.95569796 0.96286069 0.95748093 0.96086074 0.95896157] 0.9978527131293974\n[0.96286069 0.95748093 0.96086074 0.95896157 0.99785271] 0.992193893676009\n[0.95748093 0.96086074 0.95896157 0.99785271 0.99219389] 0.9999999999999999\n[0.96086074 0.95896157 0.99785271 0.99219389 1. ] 0.9590623449838599\n[0.95896157 0.99785271 0.99219389 1. 0.95906234] 0.9561088310668361\n[0.99785271 0.99219389 1. 0.95906234 0.95610883] 0.9703180021438286\n[0.99219389 1. 0.95906234 0.95610883 0.970318 ] 0.94717088519377\n[1. 0.95906234 0.95610883 0.970318 0.94717089] 0.9639770137106324\n[0.95906234 0.95610883 0.970318 0.94717089 0.96397701] 0.9712869877674842\n[0.95610883 0.970318 0.94717089 0.96397701 0.97128699] 0.9336981052065193\n[0.970318 0.94717089 0.96397701 0.97128699 0.93369811] 0.9271012433287867\n[0.94717089 0.96397701 0.97128699 0.93369811 0.92710124] 0.9197524718627522\n[0.96397701 0.97128699 0.93369811 0.92710124 0.91975247] 0.9264501113460991\n[0.97128699 0.93369811 0.92710124 0.91975247 0.92645011] 0.9469848477059132\n[0.93369811 0.92710124 0.91975247 0.92645011 0.94698485] 0.9426360510795856\n[0.92710124 0.91975247 0.92645011 0.94698485 0.94263605] 0.943884093710215\n[0.91975247 0.92645011 0.94698485 0.94263605 0.94388409] 0.9316515959415311\n"
],
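[
"The printed pairs above come from a sliding-window split of the scaled series: each row of X holds `n_steps = 5` consecutive values and y is the value that immediately follows that window. The cell below is a minimal, illustrative sketch of that idea on a toy sequence (the helper name `make_windows` is hypothetical, not the function used earlier).",
"_____no_output_____"
],
[
"import numpy as np\n\ndef make_windows(series, n_steps):\n    # Illustrative helper (hypothetical name): pair each window of n_steps\n    # consecutive values with the value that immediately follows it.\n    X_demo, y_demo = [], []\n    for i in range(len(series) - n_steps):\n        X_demo.append(series[i:i + n_steps])\n        y_demo.append(series[i + n_steps])\n    return np.array(X_demo), np.array(y_demo)\n\n# Toy usage on a short scaled sequence\ndemo_X, demo_y = make_windows(np.arange(10, dtype=float) / 10, n_steps=5)\nprint(demo_X[0], demo_y[0])  # [0.  0.1 0.2 0.3 0.4] 0.5",
"_____no_output_____"
],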
[
"n_features = 1\nX = X.reshape((X.shape[0], X.shape[1], n_features))",
"_____no_output_____"
],
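[
"Quick sanity check (optional, illustrative): after the reshape, X should have three dimensions, (samples, n_steps, 1), which is the input layout that Keras LSTM layers expect.",
"_____no_output_____"
],
[
"# Optional shape check (illustrative); assumes X and y come from the windowing step above\nprint(X.shape)  # expected: (n_samples, n_steps, 1)\nprint(y.shape)  # expected: (n_samples,)",
"_____no_output_____"
],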
[
"regressor = Sequential()\nregressor.add(LSTM(units = 50, return_sequences = True, input_shape = (n_steps, n_features)))\nregressor.add(Dropout(0.2))\nregressor.add(LSTM(units = 50, return_sequences = True))\nregressor.add(Dropout(0.2))\nregressor.add(LSTM(units = 50, return_sequences = True))\nregressor.add(Dropout(0.2))\nregressor.add(LSTM(units = 50))\nregressor.add(Dropout(0.2))\nregressor.add(Dense(units = 1))",
"_____no_output_____"
],
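[
"Before training, it can help to inspect the stacked architecture; `regressor.summary()` (standard Keras API) prints each layer with its output shape and parameter count.",
"_____no_output_____"
],
[
"# Optional: layer-by-layer overview of the stacked LSTM\nregressor.summary()",
"_____no_output_____"
],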
[
"regressor.compile(optimizer = 'adam', loss = 'mean_squared_error',metrics='acc')\n# fit model\nregressor.fit(X, y, epochs=200,validation_split=0.33,verbose=1)",
"Epoch 1/200\n46/46 [==============================] - 31s 66ms/step - loss: 0.0413 - acc: 4.1446e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 2/200\n46/46 [==============================] - 1s 19ms/step - loss: 0.0018 - acc: 0.0023 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 3/200\n46/46 [==============================] - 1s 18ms/step - loss: 0.0010 - acc: 0.0017 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 4/200\n46/46 [==============================] - 1s 19ms/step - loss: 0.0012 - acc: 7.8972e-04 - val_loss: 0.0018 - val_acc: 0.0014\nEpoch 5/200\n46/46 [==============================] - 1s 18ms/step - loss: 9.8838e-04 - acc: 0.0020 - val_loss: 0.0029 - val_acc: 0.0014\nEpoch 6/200\n46/46 [==============================] - 1s 19ms/step - loss: 8.4332e-04 - acc: 2.7397e-04 - val_loss: 0.0016 - val_acc: 0.0014\nEpoch 7/200\n46/46 [==============================] - 1s 18ms/step - loss: 9.7064e-04 - acc: 2.5319e-04 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 8/200\n46/46 [==============================] - 1s 18ms/step - loss: 7.5974e-04 - acc: 4.4105e-04 - val_loss: 0.0053 - val_acc: 0.0014\nEpoch 9/200\n46/46 [==============================] - 1s 18ms/step - loss: 7.2245e-04 - acc: 5.5955e-04 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 10/200\n46/46 [==============================] - 1s 18ms/step - loss: 7.6906e-04 - acc: 4.4265e-05 - val_loss: 0.0013 - val_acc: 0.0014\nEpoch 11/200\n46/46 [==============================] - 1s 18ms/step - loss: 7.8485e-04 - acc: 7.4539e-04 - val_loss: 0.0017 - val_acc: 0.0014\nEpoch 12/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.7681e-04 - acc: 1.5805e-04 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 13/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.2592e-04 - acc: 2.3305e-04 - val_loss: 0.0013 - val_acc: 0.0014\nEpoch 14/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.9603e-04 - acc: 3.6426e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 15/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.9394e-04 - acc: 0.0023 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 16/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.3254e-04 - acc: 5.9279e-04 - val_loss: 0.0047 - val_acc: 0.0014\nEpoch 17/200\n46/46 [==============================] - 1s 19ms/step - loss: 6.8627e-04 - acc: 6.6473e-04 - val_loss: 0.0036 - val_acc: 0.0014\nEpoch 18/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.5924e-04 - acc: 1.9449e-04 - val_loss: 0.0052 - val_acc: 0.0014\nEpoch 19/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.3536e-04 - acc: 0.0023 - val_loss: 0.0053 - val_acc: 0.0014\nEpoch 20/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.0326e-04 - acc: 4.1446e-04 - val_loss: 0.0029 - val_acc: 0.0014\nEpoch 21/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.5868e-04 - acc: 0.0017 - val_loss: 0.0010 - val_acc: 0.0014\nEpoch 22/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.8601e-04 - acc: 2.9542e-04 - val_loss: 0.0032 - val_acc: 0.0014\nEpoch 23/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.6357e-04 - acc: 6.6473e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 24/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.8197e-04 - acc: 7.8972e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 25/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.5960e-04 - acc: 7.8972e-04 - val_loss: 0.0015 - val_acc: 0.0014\nEpoch 26/200\n46/46 
[==============================] - 1s 18ms/step - loss: 5.6944e-04 - acc: 5.5955e-04 - val_loss: 0.0016 - val_acc: 0.0014\nEpoch 27/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.5081e-04 - acc: 0.0023 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 28/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.3112e-04 - acc: 1.7602e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 29/200\n46/46 [==============================] - 1s 19ms/step - loss: 5.7341e-04 - acc: 1.5805e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 30/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.0962e-04 - acc: 4.6876e-04 - val_loss: 0.0044 - val_acc: 0.0014\nEpoch 31/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.9971e-04 - acc: 4.1446e-04 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 32/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.2881e-04 - acc: 6.6473e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 33/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.0763e-04 - acc: 5.9376e-05 - val_loss: 0.0042 - val_acc: 0.0014\nEpoch 34/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.0754e-04 - acc: 8.8836e-04 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 35/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.5649e-04 - acc: 2.9542e-04 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 36/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.4731e-04 - acc: 1.2351e-04 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 37/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.9394e-04 - acc: 3.1758e-04 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 38/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.4598e-04 - acc: 8.3721e-04 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 39/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.5283e-04 - acc: 5.5955e-04 - val_loss: 0.0011 - val_acc: 0.0014\nEpoch 40/200\n46/46 [==============================] - 1s 18ms/step - loss: 6.2801e-04 - acc: 3.1758e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 41/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.6773e-04 - acc: 2.5319e-04 - val_loss: 0.0027 - val_acc: 0.0014\nEpoch 42/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.7323e-04 - acc: 3.6426e-04 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 43/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.8035e-04 - acc: 0.0016 - val_loss: 0.0015 - val_acc: 0.0014\nEpoch 44/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.6680e-04 - acc: 0.0014 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 45/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.3190e-04 - acc: 3.8888e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 46/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.2846e-04 - acc: 5.5955e-04 - val_loss: 0.0044 - val_acc: 0.0014\nEpoch 47/200\n46/46 [==============================] - 1s 19ms/step - loss: 5.1831e-04 - acc: 2.9542e-04 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 48/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.5759e-04 - acc: 5.9279e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 49/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.2579e-04 - acc: 5.2789e-04 - val_loss: 0.0043 - val_acc: 0.0014\nEpoch 50/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.4296e-04 - acc: 0.0030 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 
51/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.3410e-04 - acc: 1.9449e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 52/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.7174e-04 - acc: 3.8888e-04 - val_loss: 0.0029 - val_acc: 0.0014\nEpoch 53/200\n46/46 [==============================] - 1s 19ms/step - loss: 5.0620e-04 - acc: 1.5805e-04 - val_loss: 0.0019 - val_acc: 0.0014\nEpoch 54/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.1658e-04 - acc: 5.9376e-05 - val_loss: 0.0041 - val_acc: 0.0014\nEpoch 55/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.9667e-04 - acc: 0.0020 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 56/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.8586e-04 - acc: 5.9376e-05 - val_loss: 0.0013 - val_acc: 0.0014\nEpoch 57/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.2814e-04 - acc: 2.7397e-04 - val_loss: 0.0013 - val_acc: 0.0014\nEpoch 58/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.6987e-04 - acc: 7.0384e-04 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 59/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.1990e-04 - acc: 0.0017 - val_loss: 0.0035 - val_acc: 0.0014\nEpoch 60/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.1223e-04 - acc: 5.5955e-04 - val_loss: 0.0036 - val_acc: 0.0014\nEpoch 61/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.9079e-04 - acc: 5.5955e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 62/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6457e-04 - acc: 5.9279e-04 - val_loss: 0.0038 - val_acc: 0.0014\nEpoch 63/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.2380e-04 - acc: 2.3305e-04 - val_loss: 0.0016 - val_acc: 0.0014\nEpoch 64/200\n46/46 [==============================] - 1s 18ms/step - loss: 5.0003e-04 - acc: 4.1446e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 65/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.5458e-04 - acc: 3.8888e-04 - val_loss: 0.0039 - val_acc: 0.0014\nEpoch 66/200\n46/46 [==============================] - 1s 19ms/step - loss: 4.0022e-04 - acc: 4.4105e-04 - val_loss: 0.0016 - val_acc: 0.0014\nEpoch 67/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.3361e-04 - acc: 5.9279e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 68/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5979e-04 - acc: 0.0020 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 69/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.7468e-04 - acc: 8.3721e-04 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 70/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6588e-04 - acc: 1.5805e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 71/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.9562e-04 - acc: 0.0014 - val_loss: 0.0011 - val_acc: 0.0014\nEpoch 72/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.7218e-04 - acc: 4.1446e-04 - val_loss: 0.0017 - val_acc: 0.0014\nEpoch 73/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6589e-04 - acc: 6.6473e-04 - val_loss: 0.0044 - val_acc: 0.0014\nEpoch 74/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4550e-04 - acc: 7.0384e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 75/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.1661e-04 - acc: 5.2789e-04 - val_loss: 0.0025 - val_acc: 
0.0014\nEpoch 76/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3895e-04 - acc: 3.4051e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 77/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5793e-04 - acc: 8.3721e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 78/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5604e-04 - acc: 0.0010 - val_loss: 0.0015 - val_acc: 0.0014\nEpoch 79/200\n46/46 [==============================] - 1s 19ms/step - loss: 4.0179e-04 - acc: 9.4377e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 80/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2095e-04 - acc: 8.8836e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 81/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6513e-04 - acc: 2.9542e-04 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 82/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4070e-04 - acc: 5.9376e-05 - val_loss: 0.0017 - val_acc: 0.0014\nEpoch 83/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3555e-04 - acc: 4.6876e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 84/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5607e-04 - acc: 4.9766e-04 - val_loss: 0.0013 - val_acc: 0.0014\nEpoch 85/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.1520e-04 - acc: 1.9449e-04 - val_loss: 0.0047 - val_acc: 0.0014\nEpoch 86/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3960e-04 - acc: 8.3721e-04 - val_loss: 0.0037 - val_acc: 0.0014\nEpoch 87/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.9476e-04 - acc: 1.4056e-04 - val_loss: 0.0026 - val_acc: 0.0014\nEpoch 88/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8652e-04 - acc: 7.4839e-05 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 89/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2862e-04 - acc: 3.4051e-04 - val_loss: 0.0045 - val_acc: 0.0014\nEpoch 90/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6485e-04 - acc: 1.7602e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 91/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2125e-04 - acc: 5.5955e-04 - val_loss: 0.0038 - val_acc: 0.0014\nEpoch 92/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.8297e-04 - acc: 3.1758e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 93/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3641e-04 - acc: 5.5955e-04 - val_loss: 0.0016 - val_acc: 0.0014\nEpoch 94/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0913e-04 - acc: 6.6473e-04 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 95/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9975e-04 - acc: 2.3305e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 96/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5121e-04 - acc: 0.0013 - val_loss: 0.0037 - val_acc: 0.0014\nEpoch 97/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.7385e-04 - acc: 7.4539e-04 - val_loss: 0.0013 - val_acc: 0.0014\nEpoch 98/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5405e-04 - acc: 7.0384e-04 - val_loss: 0.0018 - val_acc: 0.0014\nEpoch 99/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3375e-04 - acc: 0.0023 - val_loss: 0.0027 - val_acc: 0.0014\nEpoch 100/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4064e-04 - acc: 8.3721e-04 - val_loss: 0.0018 - 
val_acc: 0.0014\nEpoch 101/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.1996e-04 - acc: 5.9376e-05 - val_loss: 0.0034 - val_acc: 0.0014\nEpoch 102/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0410e-04 - acc: 0.0014 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 103/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4102e-04 - acc: 7.4539e-04 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 104/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9412e-04 - acc: 0.0020 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 105/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.2401e-04 - acc: 7.0384e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 106/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0212e-04 - acc: 5.2789e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 107/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4777e-04 - acc: 0.0014 - val_loss: 0.0035 - val_acc: 0.0014\nEpoch 108/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3016e-04 - acc: 1.5805e-04 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 109/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8294e-04 - acc: 4.6876e-04 - val_loss: 0.0062 - val_acc: 0.0014\nEpoch 110/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3083e-04 - acc: 4.4105e-04 - val_loss: 0.0047 - val_acc: 0.0014\nEpoch 111/200\n46/46 [==============================] - 1s 17ms/step - loss: 3.2830e-04 - acc: 3.1758e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 112/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1399e-04 - acc: 5.9279e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 113/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.7055e-04 - acc: 1.0689e-04 - val_loss: 0.0020 - val_acc: 0.0014\nEpoch 114/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1426e-04 - acc: 1.7602e-04 - val_loss: 0.0051 - val_acc: 0.0014\nEpoch 115/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.0790e-04 - acc: 9.0669e-05 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 116/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3267e-04 - acc: 3.1758e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 117/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.1188e-04 - acc: 8.8836e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 118/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4161e-04 - acc: 2.5319e-04 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 119/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6050e-04 - acc: 8.8836e-04 - val_loss: 0.0027 - val_acc: 0.0014\nEpoch 120/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.5114e-04 - acc: 7.8972e-04 - val_loss: 0.0019 - val_acc: 0.0014\nEpoch 121/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6279e-04 - acc: 0.0017 - val_loss: 0.0038 - val_acc: 0.0014\nEpoch 122/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8946e-04 - acc: 5.9376e-05 - val_loss: 0.0019 - val_acc: 0.0014\nEpoch 123/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2052e-04 - acc: 3.8888e-04 - val_loss: 0.0052 - val_acc: 0.0014\nEpoch 124/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4244e-04 - acc: 4.4105e-04 - val_loss: 0.0032 - val_acc: 0.0014\nEpoch 125/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2955e-04 - acc: 
0.0030 - val_loss: 0.0046 - val_acc: 0.0014\nEpoch 126/200\n46/46 [==============================] - 1s 17ms/step - loss: 2.8412e-04 - acc: 6.2779e-04 - val_loss: 0.0015 - val_acc: 0.0014\nEpoch 127/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9379e-04 - acc: 7.0384e-04 - val_loss: 0.0012 - val_acc: 0.0014\nEpoch 128/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.8996e-04 - acc: 4.4265e-05 - val_loss: 0.0026 - val_acc: 0.0014\nEpoch 129/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0933e-04 - acc: 9.0669e-05 - val_loss: 0.0019 - val_acc: 0.0014\nEpoch 130/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1541e-04 - acc: 0.0030 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 131/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3784e-04 - acc: 2.9542e-04 - val_loss: 0.0041 - val_acc: 0.0014\nEpoch 132/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9964e-04 - acc: 0.0010 - val_loss: 0.0026 - val_acc: 0.0014\nEpoch 133/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0932e-04 - acc: 3.4051e-04 - val_loss: 0.0034 - val_acc: 0.0014\nEpoch 134/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7340e-04 - acc: 7.4539e-04 - val_loss: 0.0037 - val_acc: 0.0014\nEpoch 135/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4113e-04 - acc: 8.3721e-04 - val_loss: 0.0039 - val_acc: 0.0014\nEpoch 136/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.5460e-04 - acc: 9.4377e-04 - val_loss: 0.0054 - val_acc: 0.0014\nEpoch 137/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1767e-04 - acc: 0.0016 - val_loss: 0.0056 - val_acc: 0.0014\nEpoch 138/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2176e-04 - acc: 5.9376e-05 - val_loss: 0.0036 - val_acc: 0.0014\nEpoch 139/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1854e-04 - acc: 4.4105e-04 - val_loss: 0.0017 - val_acc: 0.0014\nEpoch 140/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4387e-04 - acc: 4.6876e-04 - val_loss: 0.0045 - val_acc: 0.0014\nEpoch 141/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4771e-04 - acc: 6.6473e-04 - val_loss: 0.0043 - val_acc: 0.0014\nEpoch 142/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3399e-04 - acc: 4.1446e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 143/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2157e-04 - acc: 2.9542e-04 - val_loss: 0.0051 - val_acc: 0.0014\nEpoch 144/200\n46/46 [==============================] - 1s 18ms/step - loss: 4.9260e-04 - acc: 5.9376e-05 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 145/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8149e-04 - acc: 0.0011 - val_loss: 0.0036 - val_acc: 0.0014\nEpoch 146/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0293e-04 - acc: 0.0013 - val_loss: 0.0035 - val_acc: 0.0014\nEpoch 147/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1590e-04 - acc: 6.2779e-04 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 148/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.9494e-04 - acc: 2.7397e-04 - val_loss: 0.0019 - val_acc: 0.0014\nEpoch 149/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6176e-04 - acc: 0.0012 - val_loss: 0.0025 - val_acc: 0.0014\nEpoch 150/200\n46/46 [==============================] - 1s 19ms/step - loss: 
2.7383e-04 - acc: 9.4377e-04 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 151/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9928e-04 - acc: 3.1758e-04 - val_loss: 0.0062 - val_acc: 0.0014\nEpoch 152/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2146e-04 - acc: 2.1349e-04 - val_loss: 0.0065 - val_acc: 0.0014\nEpoch 153/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8169e-04 - acc: 0.0013 - val_loss: 0.0044 - val_acc: 0.0014\nEpoch 154/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.4229e-04 - acc: 4.6876e-04 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 155/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3004e-04 - acc: 8.8836e-04 - val_loss: 0.0039 - val_acc: 0.0014\nEpoch 156/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8395e-04 - acc: 1.5805e-04 - val_loss: 0.0052 - val_acc: 0.0014\nEpoch 157/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7091e-04 - acc: 4.1446e-04 - val_loss: 0.0072 - val_acc: 0.0014\nEpoch 158/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2638e-04 - acc: 1.5805e-04 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 159/200\n46/46 [==============================] - 1s 19ms/step - loss: 2.8503e-04 - acc: 8.3721e-04 - val_loss: 0.0046 - val_acc: 0.0014\nEpoch 160/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7514e-04 - acc: 4.4265e-05 - val_loss: 0.0027 - val_acc: 0.0014\nEpoch 161/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1894e-04 - acc: 1.7602e-04 - val_loss: 0.0032 - val_acc: 0.0014\nEpoch 162/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1113e-04 - acc: 0.0011 - val_loss: 0.0039 - val_acc: 0.0014\nEpoch 163/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6806e-04 - acc: 1.7602e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 164/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0469e-04 - acc: 0.0017 - val_loss: 0.0034 - val_acc: 0.0014\nEpoch 165/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.6035e-04 - acc: 3.1758e-04 - val_loss: 0.0041 - val_acc: 0.0014\nEpoch 166/200\n46/46 [==============================] - 1s 19ms/step - loss: 2.6025e-04 - acc: 0.0010 - val_loss: 0.0022 - val_acc: 0.0014\nEpoch 167/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0815e-04 - acc: 0.0014 - val_loss: 0.0045 - val_acc: 0.0014\nEpoch 168/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1344e-04 - acc: 1.0689e-04 - val_loss: 0.0041 - val_acc: 0.0014\nEpoch 169/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8675e-04 - acc: 2.3305e-04 - val_loss: 0.0047 - val_acc: 0.0014\nEpoch 170/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1614e-04 - acc: 6.2779e-04 - val_loss: 0.0026 - val_acc: 0.0014\nEpoch 171/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9787e-04 - acc: 9.4377e-04 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 172/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0977e-04 - acc: 3.8888e-04 - val_loss: 0.0047 - val_acc: 0.0014\nEpoch 173/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.6167e-04 - acc: 7.0384e-04 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 174/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7698e-04 - acc: 2.1349e-04 - val_loss: 0.0042 - val_acc: 0.0014\nEpoch 175/200\n46/46 
[==============================] - 1s 19ms/step - loss: 2.9524e-04 - acc: 1.2351e-04 - val_loss: 0.0044 - val_acc: 0.0014\nEpoch 176/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7969e-04 - acc: 7.8972e-04 - val_loss: 0.0034 - val_acc: 0.0014\nEpoch 177/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2410e-04 - acc: 0.0011 - val_loss: 0.0021 - val_acc: 0.0014\nEpoch 178/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1939e-04 - acc: 1.2351e-04 - val_loss: 0.0024 - val_acc: 0.0014\nEpoch 179/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7736e-04 - acc: 1.4056e-04 - val_loss: 0.0043 - val_acc: 0.0014\nEpoch 180/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8461e-04 - acc: 0.0012 - val_loss: 0.0017 - val_acc: 0.0014\nEpoch 181/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8303e-04 - acc: 4.4105e-04 - val_loss: 0.0060 - val_acc: 0.0014\nEpoch 182/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3489e-04 - acc: 3.8888e-04 - val_loss: 0.0039 - val_acc: 0.0014\nEpoch 183/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7368e-04 - acc: 9.4377e-04 - val_loss: 0.0023 - val_acc: 0.0014\nEpoch 184/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.9314e-04 - acc: 1.2351e-04 - val_loss: 0.0040 - val_acc: 0.0014\nEpoch 185/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.6763e-04 - acc: 6.2779e-04 - val_loss: 0.0034 - val_acc: 0.0014\nEpoch 186/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.7664e-04 - acc: 4.4265e-05 - val_loss: 0.0041 - val_acc: 0.0014\nEpoch 187/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.6118e-04 - acc: 1.7602e-04 - val_loss: 0.0030 - val_acc: 0.0014\nEpoch 188/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3977e-04 - acc: 3.6426e-04 - val_loss: 0.0048 - val_acc: 0.0014\nEpoch 189/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.3635e-04 - acc: 0.0023 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 190/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.8919e-04 - acc: 1.0689e-04 - val_loss: 0.0042 - val_acc: 0.0014\nEpoch 191/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.7437e-04 - acc: 4.6876e-04 - val_loss: 0.0054 - val_acc: 0.0014\nEpoch 192/200\n46/46 [==============================] - 1s 17ms/step - loss: 2.9172e-04 - acc: 4.6876e-04 - val_loss: 0.0045 - val_acc: 0.0014\nEpoch 193/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.1524e-04 - acc: 2.5319e-04 - val_loss: 0.0034 - val_acc: 0.0014\nEpoch 194/200\n46/46 [==============================] - 1s 20ms/step - loss: 3.1814e-04 - acc: 8.3721e-04 - val_loss: 0.0031 - val_acc: 0.0014\nEpoch 195/200\n46/46 [==============================] - 1s 18ms/step - loss: 2.4157e-04 - acc: 1.7602e-04 - val_loss: 0.0028 - val_acc: 0.0014\nEpoch 196/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.2499e-04 - acc: 1.9449e-04 - val_loss: 0.0035 - val_acc: 0.0014\nEpoch 197/200\n46/46 [==============================] - 1s 18ms/step - loss: 3.0323e-04 - acc: 0.0023 - val_loss: 0.0057 - val_acc: 0.0014\nEpoch 198/200\n46/46 [==============================] - 1s 24ms/step - loss: 2.5472e-04 - acc: 0.0010 - val_loss: 0.0050 - val_acc: 0.0014\nEpoch 199/200\n46/46 [==============================] - 1s 19ms/step - loss: 2.7585e-04 - acc: 9.0669e-05 - val_loss: 0.0063 - val_acc: 
0.0014\nEpoch 200/200\n46/46 [==============================] - 1s 19ms/step - loss: 3.1061e-04 - acc: 4.4265e-05 - val_loss: 0.0061 - val_acc: 0.0014\n"
],
[
"x=Goog.iloc[1758:]",
"_____no_output_____"
],
[
"# demonstrate prediction\nx_input = array([0.91975247, 0.92645011, 0.94698485, 0.94263605, 0.94388409])\nx_input = x_input.reshape((1, n_steps, n_features))\nyhat = regressor.predict(x_input, verbose=0)\nprint(yhat)",
"[[0.7772312]]\n"
]
],
[
[
"#Stacked LSTM",
"_____no_output_____"
]
],
[
[
"df1=df.reset_index()['AAPL']",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.plot(df1)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import MinMaxScaler\nscaler=MinMaxScaler(feature_range=(0,1))\ndf1=scaler.fit_transform(np.array(df1).reshape(-1,1))",
"_____no_output_____"
],
[
"print(df1)",
"[[0.01102638]\n [0.01046185]\n [0.01220906]\n ...\n [0.97208751]\n [0.98824476]\n [0.95470465]]\n"
],
[
"##splitting dataset into train and test split\ntraining_size=int(len(df1)*0.65)\ntest_size=len(df1)-training_size\ntrain_data,test_data=df1[0:training_size,:],df1[training_size:len(df1),:1]",
"_____no_output_____"
],
[
"training_size,test_size",
"_____no_output_____"
],
[
"import numpy\n# convert an array of values into a dataset matrix\ndef create_dataset(dataset, time_step=1):\n\tdataX, dataY = [], []\n\tfor i in range(len(dataset)-time_step-1):\n\t\ta = dataset[i:(i+time_step), 0] ###i=0, 0,1,2,3-----99 100 \n\t\tdataX.append(a)\n\t\tdataY.append(dataset[i + time_step, 0])\n\treturn numpy.array(dataX), numpy.array(dataY)",
"_____no_output_____"
],
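[
"# Hypothetical illustration (not part of the original run): how create_dataset slices a\n# series into sliding windows. With time_step=3, a 6-element series yields two samples:\n# X=[0,1,2] -> Y=3 and X=[1,2,3] -> Y=4 (the last possible pair is dropped by the -1).\ndemo = numpy.arange(6).reshape(-1, 1)\ndemo_X, demo_y = create_dataset(demo, time_step=3)\nprint(demo_X)\nprint(demo_y)",
"_____no_output_____"
],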
[
"# reshape into X=t,t+1,t+2,t+3 and Y=t+4\ntime_step = 100\nX_train, y_train = create_dataset(train_data, time_step)\nX_test, ytest = create_dataset(test_data, time_step)",
"_____no_output_____"
],
[
"print(X_train.shape), print(y_train.shape)",
"(1302, 100)\n(1302,)\n"
],
[
"print(X_test.shape), print(ytest.shape)",
"(655, 100)\n(655,)\n"
],
[
"# reshape input to be [samples, time steps, features] which is required for LSTM\nX_train =X_train.reshape(X_train.shape[0],X_train.shape[1] , 1)\nX_test = X_test.reshape(X_test.shape[0],X_test.shape[1] , 1)",
"_____no_output_____"
],
[
"model=Sequential()\nmodel.add(LSTM(50,return_sequences=True,input_shape=(100,1)))\nmodel.add(LSTM(50,return_sequences=True))\nmodel.add(LSTM(50))\nmodel.add(Dense(1))\nmodel.compile(loss='mean_squared_error',optimizer='adam')",
"_____no_output_____"
],
[
"model.fit(X_train,y_train,validation_data=(X_test,ytest),epochs=100,batch_size=64,verbose=1)",
"Epoch 1/100\n21/21 [==============================] - 8s 147ms/step - loss: 0.0055 - val_loss: 0.0074\nEpoch 2/100\n21/21 [==============================] - 2s 81ms/step - loss: 7.3898e-04 - val_loss: 0.0015\nEpoch 3/100\n21/21 [==============================] - 2s 80ms/step - loss: 1.7542e-04 - val_loss: 0.0013\nEpoch 4/100\n21/21 [==============================] - 2s 82ms/step - loss: 1.3293e-04 - val_loss: 0.0013\nEpoch 5/100\n21/21 [==============================] - 2s 82ms/step - loss: 1.3069e-04 - val_loss: 0.0013\nEpoch 6/100\n21/21 [==============================] - 2s 81ms/step - loss: 1.2294e-04 - val_loss: 0.0012\nEpoch 7/100\n21/21 [==============================] - 2s 82ms/step - loss: 1.1438e-04 - val_loss: 0.0013\nEpoch 8/100\n21/21 [==============================] - 2s 82ms/step - loss: 1.0705e-04 - val_loss: 0.0012\nEpoch 9/100\n21/21 [==============================] - 2s 82ms/step - loss: 1.0138e-04 - val_loss: 0.0011\nEpoch 10/100\n21/21 [==============================] - 2s 81ms/step - loss: 1.0170e-04 - val_loss: 0.0010\nEpoch 11/100\n21/21 [==============================] - 2s 81ms/step - loss: 9.7016e-05 - val_loss: 9.6234e-04\nEpoch 12/100\n21/21 [==============================] - 2s 81ms/step - loss: 1.0561e-04 - val_loss: 0.0014\nEpoch 13/100\n21/21 [==============================] - 2s 82ms/step - loss: 1.0072e-04 - val_loss: 8.7633e-04\nEpoch 14/100\n21/21 [==============================] - 2s 82ms/step - loss: 9.3635e-05 - val_loss: 9.4732e-04\nEpoch 15/100\n21/21 [==============================] - 2s 82ms/step - loss: 8.0438e-05 - val_loss: 8.8981e-04\nEpoch 16/100\n21/21 [==============================] - 2s 82ms/step - loss: 8.6290e-05 - val_loss: 0.0010\nEpoch 17/100\n21/21 [==============================] - 2s 81ms/step - loss: 8.2555e-05 - val_loss: 9.5223e-04\nEpoch 18/100\n21/21 [==============================] - 2s 83ms/step - loss: 8.2828e-05 - val_loss: 9.8514e-04\nEpoch 19/100\n21/21 [==============================] - 2s 83ms/step - loss: 7.7955e-05 - val_loss: 0.0016\nEpoch 20/100\n21/21 [==============================] - 2s 82ms/step - loss: 7.6072e-05 - val_loss: 0.0010\nEpoch 21/100\n21/21 [==============================] - 2s 82ms/step - loss: 7.7914e-05 - val_loss: 0.0012\nEpoch 22/100\n21/21 [==============================] - 2s 81ms/step - loss: 8.3449e-05 - val_loss: 0.0013\nEpoch 23/100\n21/21 [==============================] - 2s 82ms/step - loss: 8.4365e-05 - val_loss: 0.0014\nEpoch 24/100\n21/21 [==============================] - 2s 81ms/step - loss: 7.6370e-05 - val_loss: 9.9972e-04\nEpoch 25/100\n21/21 [==============================] - 2s 82ms/step - loss: 7.5948e-05 - val_loss: 6.5758e-04\nEpoch 26/100\n21/21 [==============================] - 2s 82ms/step - loss: 7.0834e-05 - val_loss: 7.2038e-04\nEpoch 27/100\n21/21 [==============================] - 2s 81ms/step - loss: 7.2306e-05 - val_loss: 7.8592e-04\nEpoch 28/100\n21/21 [==============================] - 2s 81ms/step - loss: 7.0701e-05 - val_loss: 7.8792e-04\nEpoch 29/100\n21/21 [==============================] - 2s 82ms/step - loss: 6.7346e-05 - val_loss: 6.4847e-04\nEpoch 30/100\n21/21 [==============================] - 2s 82ms/step - loss: 7.0343e-05 - val_loss: 9.2930e-04\nEpoch 31/100\n21/21 [==============================] - 2s 83ms/step - loss: 6.0994e-05 - val_loss: 7.0833e-04\nEpoch 32/100\n21/21 [==============================] - 2s 82ms/step - loss: 6.3155e-05 - val_loss: 5.9278e-04\nEpoch 33/100\n21/21 [==============================] - 2s 82ms/step - loss: 
6.7312e-05 - val_loss: 5.7124e-04\nEpoch 34/100\n21/21 [==============================] - 2s 81ms/step - loss: 5.8641e-05 - val_loss: 9.7415e-04\nEpoch 35/100\n21/21 [==============================] - 2s 81ms/step - loss: 7.6808e-05 - val_loss: 9.3167e-04\nEpoch 36/100\n21/21 [==============================] - 2s 81ms/step - loss: 7.8438e-05 - val_loss: 8.8547e-04\nEpoch 37/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.8995e-05 - val_loss: 5.4396e-04\nEpoch 38/100\n21/21 [==============================] - 2s 81ms/step - loss: 6.0176e-05 - val_loss: 5.3857e-04\nEpoch 39/100\n21/21 [==============================] - 2s 81ms/step - loss: 5.4084e-05 - val_loss: 8.2597e-04\nEpoch 40/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.8274e-05 - val_loss: 5.3620e-04\nEpoch 41/100\n21/21 [==============================] - 2s 81ms/step - loss: 5.9337e-05 - val_loss: 6.0110e-04\nEpoch 42/100\n21/21 [==============================] - 2s 81ms/step - loss: 6.2708e-05 - val_loss: 8.2457e-04\nEpoch 43/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.9641e-05 - val_loss: 5.2786e-04\nEpoch 44/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.5381e-05 - val_loss: 0.0011\nEpoch 45/100\n21/21 [==============================] - 2s 82ms/step - loss: 6.2622e-05 - val_loss: 0.0011\nEpoch 46/100\n21/21 [==============================] - 2s 81ms/step - loss: 6.0674e-05 - val_loss: 7.6918e-04\nEpoch 47/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.1535e-05 - val_loss: 5.0522e-04\nEpoch 48/100\n21/21 [==============================] - 2s 83ms/step - loss: 5.5776e-05 - val_loss: 5.4243e-04\nEpoch 49/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.3812e-05 - val_loss: 6.5790e-04\nEpoch 50/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.0934e-05 - val_loss: 4.7011e-04\nEpoch 51/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.2624e-05 - val_loss: 5.1578e-04\nEpoch 52/100\n21/21 [==============================] - 2s 83ms/step - loss: 6.1894e-05 - val_loss: 4.5966e-04\nEpoch 53/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.8618e-05 - val_loss: 5.1143e-04\nEpoch 54/100\n21/21 [==============================] - 2s 83ms/step - loss: 5.6495e-05 - val_loss: 4.9687e-04\nEpoch 55/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.6151e-05 - val_loss: 5.2262e-04\nEpoch 56/100\n21/21 [==============================] - 2s 81ms/step - loss: 4.5036e-05 - val_loss: 5.6196e-04\nEpoch 57/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.5910e-05 - val_loss: 4.4019e-04\nEpoch 58/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.8520e-05 - val_loss: 5.8916e-04\nEpoch 59/100\n21/21 [==============================] - 2s 83ms/step - loss: 4.5644e-05 - val_loss: 4.1624e-04\nEpoch 60/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.8064e-05 - val_loss: 4.1182e-04\nEpoch 61/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.4789e-05 - val_loss: 4.0218e-04\nEpoch 62/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.3052e-05 - val_loss: 3.9525e-04\nEpoch 63/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.7330e-05 - val_loss: 3.9551e-04\nEpoch 64/100\n21/21 [==============================] - 2s 83ms/step - loss: 4.3588e-05 - val_loss: 4.3704e-04\nEpoch 65/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.5469e-05 
- val_loss: 3.7983e-04\nEpoch 66/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.3143e-05 - val_loss: 7.9627e-04\nEpoch 67/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.6773e-05 - val_loss: 3.8149e-04\nEpoch 68/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.1199e-05 - val_loss: 4.4398e-04\nEpoch 69/100\n21/21 [==============================] - 2s 82ms/step - loss: 5.7270e-05 - val_loss: 3.7625e-04\nEpoch 70/100\n21/21 [==============================] - 2s 81ms/step - loss: 4.0803e-05 - val_loss: 3.5456e-04\nEpoch 71/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.0736e-05 - val_loss: 3.5344e-04\nEpoch 72/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.8076e-05 - val_loss: 3.5854e-04\nEpoch 73/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.8906e-05 - val_loss: 6.8496e-04\nEpoch 74/100\n21/21 [==============================] - 2s 80ms/step - loss: 4.0120e-05 - val_loss: 3.4409e-04\nEpoch 75/100\n21/21 [==============================] - 2s 81ms/step - loss: 3.5974e-05 - val_loss: 3.7853e-04\nEpoch 76/100\n21/21 [==============================] - 2s 81ms/step - loss: 4.3123e-05 - val_loss: 4.2999e-04\nEpoch 77/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.9322e-05 - val_loss: 3.6654e-04\nEpoch 78/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.9484e-05 - val_loss: 3.2581e-04\nEpoch 79/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.2339e-05 - val_loss: 3.1839e-04\nEpoch 80/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.6234e-05 - val_loss: 3.1996e-04\nEpoch 81/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.7255e-05 - val_loss: 3.3661e-04\nEpoch 82/100\n21/21 [==============================] - 2s 83ms/step - loss: 3.4999e-05 - val_loss: 3.3089e-04\nEpoch 83/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.2795e-05 - val_loss: 3.4448e-04\nEpoch 84/100\n21/21 [==============================] - 2s 81ms/step - loss: 3.5872e-05 - val_loss: 3.6146e-04\nEpoch 85/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.4084e-05 - val_loss: 3.1687e-04\nEpoch 86/100\n21/21 [==============================] - 2s 81ms/step - loss: 3.2673e-05 - val_loss: 3.1422e-04\nEpoch 87/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.1982e-05 - val_loss: 2.9620e-04\nEpoch 88/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.7375e-05 - val_loss: 3.2006e-04\nEpoch 89/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.8632e-05 - val_loss: 5.1199e-04\nEpoch 90/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.3239e-05 - val_loss: 2.8144e-04\nEpoch 91/100\n21/21 [==============================] - 2s 81ms/step - loss: 3.8332e-05 - val_loss: 4.0891e-04\nEpoch 92/100\n21/21 [==============================] - 2s 81ms/step - loss: 3.9140e-05 - val_loss: 3.3245e-04\nEpoch 93/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.7095e-05 - val_loss: 2.7598e-04\nEpoch 94/100\n21/21 [==============================] - 2s 81ms/step - loss: 3.5018e-05 - val_loss: 3.1787e-04\nEpoch 95/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.0992e-05 - val_loss: 3.1871e-04\nEpoch 96/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.2719e-05 - val_loss: 2.8855e-04\nEpoch 97/100\n21/21 [==============================] - 2s 82ms/step - loss: 2.8539e-05 - 
val_loss: 2.8535e-04\nEpoch 98/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.1880e-05 - val_loss: 2.6324e-04\nEpoch 99/100\n21/21 [==============================] - 2s 82ms/step - loss: 3.7977e-05 - val_loss: 5.1695e-04\nEpoch 100/100\n21/21 [==============================] - 2s 82ms/step - loss: 4.1232e-05 - val_loss: 2.5732e-04\n"
],
[
"### Lets Do the prediction and check performance metrics\ntrain_predict=model.predict(X_train)\ntest_predict=model.predict(X_test)",
"_____no_output_____"
],
[
"##Transformback to original form\ntrain_predict=scaler.inverse_transform(train_predict)\ntest_predict=scaler.inverse_transform(test_predict)",
"_____no_output_____"
],
[
"### Calculate RMSE performance metrics\nimport math\nfrom sklearn.metrics import mean_squared_error\nmath.sqrt(mean_squared_error(y_train,train_predict))",
"_____no_output_____"
],
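[
"# Sketch (added note, not part of the original run): the RMSE above compares y_train,\n# which is still min-max scaled, against the inverse-transformed predictions, so the two\n# are on different scales. A consistent check inverse-transforms the targets as well:\nmath.sqrt(mean_squared_error(scaler.inverse_transform(y_train.reshape(-1, 1)), train_predict))",
"_____no_output_____"
],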
[
"### Plotting \n# shift train predictions for plotting\nlook_back=100\ntrainPredictPlot = numpy.empty_like(df1)\ntrainPredictPlot[:, :] = np.nan\ntrainPredictPlot[look_back:len(train_predict)+look_back, :] = train_predict\n# shift test predictions for plotting\ntestPredictPlot = numpy.empty_like(df1)\ntestPredictPlot[:, :] = numpy.nan\ntestPredictPlot[len(train_predict)+(look_back*2)+1:len(df1)-1, :] = test_predict\n# plot baseline and predictions\nplt.plot(scaler.inverse_transform(df1))\nplt.plot(trainPredictPlot)\nplt.plot(testPredictPlot)\nplt.show()",
"_____no_output_____"
],
[
"len(test_data)",
"_____no_output_____"
],
[
"x_input=test_data[756:].reshape(1,-1)\nx_input.shape",
"_____no_output_____"
],
[
"\ntemp_input=list(x_input)\ntemp_input=temp_input[0].tolist()",
"_____no_output_____"
],
[
"# demonstrate prediction for next 10 days\nfrom numpy import array\n\nlst_output=[]\nn_steps=100\ni=0\nwhile(i<30):\n \n if(len(temp_input)>100):\n #print(temp_input)\n x_input=np.array(temp_input[1:])\n print(\"{} day input {}\".format(i,x_input))\n x_input=x_input.reshape(1,-1)\n x_input = x_input.reshape((1, n_steps, 1))\n #print(x_input)\n yhat = model.predict(x_input, verbose=0)\n print(\"{} day output {}\".format(i,yhat))\n temp_input.extend(yhat[0].tolist())\n temp_input=temp_input[1:]\n #print(temp_input)\n lst_output.extend(yhat.tolist())\n i=i+1\n else:\n x_input = x_input.reshape((1, n_steps,1))\n yhat = model.predict(x_input, verbose=0)\n print(yhat[0])\n temp_input.extend(yhat[0].tolist())\n print(len(temp_input))\n lst_output.extend(yhat.tolist())\n i=i+1\n \n\nprint(lst_output)",
"0 day input [0.2628683 0.25271373 0.23745685 0.22770248 0.25356411 0.25101295\n 0.25121304 0.25498977 0.25543995 0.21609725 0.23127907 0.23045369\n 0.23750689 0.24390976 0.24513533 0.2413586 0.23563104 0.24330951\n 0.24798661 0.25028764 0.25268874 0.24388476 0.24543545 0.24238407\n 0.25503976 0.25138814 0.24733629 0.27377321 0.27674955 0.27694965\n 0.28877996 0.29610824 0.29625834 0.28800462 0.28667902 0.28422789\n 0.28787955 0.28610374 0.28765446 0.28670402 0.28797958 0.29073084\n 0.28830474 0.2930819 0.2962333 0.29648343 0.29783402 0.29353208\n 0.29808415 0.30028515 0.29948478 0.29695865 0.29190637 0.29293184\n 0.30791356 0.31294084 0.31494175 0.319994 0.32597169 0.33072385\n 0.32699716 0.331074 0.34840678 0.33830225 0.33252466 0.32764743\n 0.33184935 0.33247463 0.335551 0.33877747 0.3457306 0.3490571\n 0.34990747 0.35318394 0.36093745 0.35943676 0.36223801 0.35806113\n 0.35786104 0.35876144 0.35881148 0.36851586 0.37034167 0.37201742\n 0.37939573 0.37859539 0.37389326 0.37144217 0.37221751 0.36236307\n 0.38699917 0.38357261 0.39007555 0.38189686 0.36784054 0.36794057\n 0.36248813 0.35363413 0.32497125 0.33232457 0.3379771 0.33587616\n 0.33317494 0.31839328 0.32717225 0.31761794 0.30981444 0.30808865\n 0.30623781 0.30411187 0.3064129 0.29833428 0.29390728 0.3097644\n 0.31701765 0.32372069 0.33605122 0.34212897 0.34770647 0.34615579\n 0.34605572 0.34252916 0.34540544 0.35681057 0.35535991 0.35933673\n 0.35763595 0.35713573 0.34960735 0.3601871 0.36003704 0.35548497\n 0.36456407 0.36751538 0.3717173 0.37126707 0.36073735 0.36378873\n 0.36876595 0.36506429 0.36894104 0.3737182 0.37194239 0.36906611\n 0.3748437 0.36716523 0.37874545 0.38279726 0.38237208 0.37824523\n 0.38004605 0.3848982 0.3826472 0.39330198 0.38177179 0.37074186\n 0.34402981 0.35318394 0.35828622 0.36926616 0.36316345 0.36188785\n 0.38312242 0.36756542 0.36503929 0.37694464 0.38657399 0.38659899\n 0.39230155 0.39185136 0.36729029 0.37691964 0.37109201 0.37451854\n 0.38322245 0.38254717 0.37494373 0.38367267 0.39390227 0.39385223\n 0.39612827 0.40245611 0.41968886 0.4184383 0.40758343 0.41045971\n 0.41246061 0.41763796 0.41311093 0.40503227 0.4075084 0.4049072\n 0.41328599 0.41043471 0.40775852 0.4206393 0.42218999 0.40810868\n 0.41276077 0.42824271 0.42836778 0.42171477 0.42829274 0.43594618\n 0.4512531 0.45040268 0.4490271 0.446651 0.44892703 0.45175331\n 0.46200791 0.46063232 0.46868591 0.46968638 0.47718976 0.48336754\n 0.46896103 0.468886 0.48264219 0.50030017 0.50450204 0.50357664\n 0.50385173 0.5093292 0.51110505 0.51625736 0.51565704 0.52193489\n 0.51735787 0.52516137 0.52851286 0.52648696 0.51873345 0.51578215\n 0.51520686 0.52668702 0.52148471 0.53036367 0.52888802 0.52115955\n 0.50937927 0.51510679 0.52471111 0.53754189 0.52806268 0.53196443\n 0.53769195 0.53941773 0.5486469 0.56042717 0.56180284 0.56012705\n 0.56082736 0.55937674 0.57078187 0.57145715 0.58556353 0.58528837\n 0.58959031 0.59491772 0.61167529 0.60437197 0.61029963 0.60677311\n 0.61877848 0.63488573 0.63663648 0.65321895 0.64251414 0.63916264\n 0.64891701 0.65764599 0.65224355 0.65506984 0.65889655 0.65659549\n 0.63318499 0.6550448 0.67167727 0.67050174 0.63458561 0.63245964\n 0.65794611 0.66444906 0.67385324 0.66089743 0.66469911 0.65984692\n 0.67883053 0.67300286 0.673203 0.65832127 0.66987646 0.6615727\n 0.64343954 0.60624782 0.58098643 0.59241659 0.54457005 0.54416986\n 0.60782354 0.58408788 0.61765294 0.59309195 0.58336253 0.52618684\n 0.57413337 0.54934721 0.4813166 0.55570009 0.46625985 0.49289682\n 0.47741485 0.47268772 0.43382025 
0.42163974 0.4779401 0.47453857\n 0.50685311 0.48009107 0.49777401 0.49647341 0.46300838 0.47306288\n 0.46425894 0.51693264 0.5093292 0.52593671 0.53073883 0.54389477\n 0.57841028 0.57185734 0.57750991 0.56778049 0.5530989 0.53168927\n 0.55102299 0.54834677 0.56820572 0.56870597 0.55722574 0.5801111\n 0.59529287 0.5834626 0.59369219 0.60469713 0.6123756 0.62015407\n 0.6361363 0.6483418 0.63933774 0.62993348 0.63466064 0.63008354\n 0.6482167 0.64366471 0.65889655 0.65294386 0.65804618 0.65264374\n 0.65609523 0.65644542 0.65567008 0.66544949 0.66917614 0.67362814\n 0.66662502 0.68958534 0.69448752 0.72082437 0.74295935 0.70059028\n 0.70784353 0.71832325 0.74105847 0.73983294 0.74018314 0.73515585\n 0.75804113 0.77719977 0.76101748 0.77297286 0.74493526 0.76531942\n 0.77287279 0.77114701 0.77114701 0.79550802 0.79260671 0.81431646\n 0.81841834 0.82009406 0.81566709 0.83147422 0.83815218 0.82612178\n 0.8241709 0.84448003 0.83089894 0.83362515 0.78933024 0.78702917\n 0.80898905 0.79340709 0.81129012 0.82279531 0.92354065 0.95032768\n 0.95760597 0.96158275 1. 0.97208751 0.98824476 0.95470465]\n"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c15cf9cce03f9a53800bb0d9dcf2d3cc028f76 | 26,368 | ipynb | Jupyter Notebook | notebooks/baseline_model.ipynb | Navjotbians/Toxic-comment-classifier | 6bfe4e7513e925a0207ce9d9b8171069c9f4e210 | [
"MIT"
] | 1 | 2021-11-25T03:04:25.000Z | 2021-11-25T03:04:25.000Z | notebooks/baseline_model.ipynb | Navjotbians/Toxic-comment-classifier | 6bfe4e7513e925a0207ce9d9b8171069c9f4e210 | [
"MIT"
] | 5 | 2021-02-23T14:10:55.000Z | 2021-04-28T18:41:21.000Z | notebooks/baseline_model.ipynb | Navjotbians/Toxic-comment-classifier | 6bfe4e7513e925a0207ce9d9b8171069c9f4e210 | [
"MIT"
] | 1 | 2021-12-15T03:16:54.000Z | 2021-12-15T03:16:54.000Z | 26.420842 | 639 | 0.558594 | [
[
[
"# Baseline model classification",
"_____no_output_____"
],
[
"The purpose of this notebook is to make predictions for all six categories on the given dataset using some set of rules.\n<br>Let's assume that human labellers have labelled these comments based on the certain kind of words present in the comments. So it is worth exploring the comments to check the kind of words used under every category and how many times that word occurred in that category. So in this notebook, six datasets are created from the main dataset, to make the analysis easy for each category. After this, counting and storing the most frequently used words under each category is done. For each category, then we are checking the presence of `top n` words from the frequently used word dictionary, in the comments, to make the prediction.",
"_____no_output_____"
],
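[
"A minimal sketch of the rule (illustrative only; the actual `word_intersection` implementation is built later in this notebook): for each category, predict 1 if any of that category's `top n` vocabulary words appears in the comment, otherwise 0.\n\n```python\n# hypothetical helper, not part of the original notebook\ndef rule_predict(comment, top_words_per_category):\n    return [1 if any(w in comment for w in words) else 0\n            for words in top_words_per_category.values()]\n```",
"_____no_output_____"
],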
[
"### 1. Import libraries and load data",
"_____no_output_____"
],
[
"For preparation lets import the required libraries and the data",
"_____no_output_____"
]
],
[
[
"import os\ndir_path = os.path.dirname(os.getcwd())",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport nltk\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nimport re\nimport string\nimport operator\nimport pickle\nimport sys \nsys.path.append(os.path.join(dir_path, \"src\"))\nfrom clean_comments import clean\n",
"_____no_output_____"
],
[
"train_path = os.path.join(dir_path, 'data', 'raw', 'train.csv')",
"_____no_output_____"
],
[
"## Load dataset\ndf = pd.read_csv(train_path)",
"_____no_output_____"
]
],
[
[
"### <br>2. Datasets for each category",
"_____no_output_____"
],
[
"Dataset with toxic comments",
"_____no_output_____"
]
],
[
[
"#extract dataset with toxic label\ndf_toxic = df[df['toxic'] == 1]\n#Reseting the index\ndf_toxic.set_index(['id'], inplace = True)\ndf_toxic.reset_index(level =['id'], inplace = True)",
"_____no_output_____"
]
],
[
[
"Dataset of severe toxic comments",
"_____no_output_____"
]
],
[
[
"#extract dataset with Severe toxic label\ndf_severe_toxic = df[df['severe_toxic'] == 1]\n#Reseting the index\ndf_severe_toxic.set_index(['id'], inplace = True)\ndf_severe_toxic.reset_index(level =['id'], inplace = True)",
"_____no_output_____"
]
],
[
[
"Dataset with obscene comment ",
"_____no_output_____"
]
],
[
[
"#extract dataset with obscens label\ndf_obscene = df[df['obscene'] == 1]\n#Reseting the index\ndf_obscene.set_index(['id'], inplace = True)\ndf_obscene.reset_index(level =['id'], inplace = True)\n#df_obscene =df_obscene.drop('comment_text', axis=1)",
"_____no_output_____"
]
],
[
[
"Dataset with comments labeled as \"identity_hate\" ",
"_____no_output_____"
]
],
[
[
"df_identity_hate = df[df['identity_hate'] == 1]\n#Reseting the index\ndf_identity_hate.set_index(['id'], inplace = True)\ndf_identity_hate.reset_index(level =['id'], inplace = True)",
"_____no_output_____"
]
],
[
[
"Dataset with all the threat comments",
"_____no_output_____"
]
],
[
[
"df_threat = df[df['threat'] == 1]\n#Reseting the index\ndf_threat.set_index(['id'], inplace = True)\ndf_threat.reset_index(level =['id'], inplace = True)",
"_____no_output_____"
]
],
[
[
"Dataset of comments with \"Insult\" label",
"_____no_output_____"
]
],
[
[
"df_insult = df[df['insult'] == 1]\n#Reseting the index\ndf_insult.set_index(['id'], inplace = True)\ndf_insult.reset_index(level =['id'], inplace = True)",
"_____no_output_____"
]
],
[
[
"Dataset with comments which have all six labels",
"_____no_output_____"
]
],
[
[
"df_6 = df[(df['toxic']==1) & (df['severe_toxic']==1) &\n (df['obscene']==1) & (df['threat']==1)& \n (df['insult']==1)& (df['identity_hate']==1)]",
"_____no_output_____"
],
[
"df_6.set_index(['id'], inplace = True)\ndf_6.reset_index(level =['id'], inplace = True) \n# df6 = df_6.drop('comment_text', axis=1)",
"_____no_output_____"
]
],
[
[
"### <br> 3. Preperation of vocab",
"_____no_output_____"
]
],
[
[
"### frequent_words function take dataset as an input and returns two arguments - \n### all_words and counts.\n### all_words gives all the words occuring in the provided dataset\n### counts gives dictionary with keys as a words those exists in the entire dataset and values\n### as a count of existance of these words in the dataset.\n\ndef frequent_words(data):\n all_word = []\n counts = dict()\n for i in range (0,len(data)):\n\n ### Load input\n input_str = data.comment_text[i]\n\n ### Clean input data\n processed_text = clean(input_str)\n\n ### perform tokenization\n tokened_text = word_tokenize(processed_text)\n\n ### remove stop words\n comment_word = []\n for word in tokened_text:\n if word not in stopwords.words('english'):\n comment_word.append(word)\n #print(len(comment_word))\n all_word.extend(comment_word)\n \n for word in all_word:\n if word in counts:\n counts[word] += 1\n else:\n counts[word] = 1\n \n return all_word, counts",
"_____no_output_____"
],
[
"## descend_order_dict funtion takes dataframe as an input and outputs sorted vocab dictionary\n## with the values sorted in descending order (keys are words and values are word count)\n\ndef descend_order_dict(data):\n all_words, word_count = frequent_words(data)\n sorted_dict = dict( sorted(word_count.items(), key=operator.itemgetter(1),reverse=True))\n return sorted_dict",
"_____no_output_____"
],
[
"label_sequence = df.columns.drop(\"id\")\nlabel_sequence = label_sequence.drop(\"comment_text\").tolist()\nlabel_sequence",
"_____no_output_____"
]
],
[
[
"#### <br>Getting the vocab used in each category in descending order its count ",
"_____no_output_____"
],
[
"For **`toxic`** category",
"_____no_output_____"
]
],
[
[
"descend_order_toxic_dict = descend_order_dict(df_toxic)",
"_____no_output_____"
]
],
[
[
"These are the words most frequently used in toxic comments",
"_____no_output_____"
],
[
"<br>For **`severe_toxic`** category",
"_____no_output_____"
]
],
[
[
"descend_order_severe_toxic_dict =descend_order_dict(df_severe_toxic)",
"_____no_output_____"
]
],
[
[
"These are the words most frequently used in severe toxic comments",
"_____no_output_____"
],
[
"<br>For **`obscene`** category",
"_____no_output_____"
]
],
[
[
"descend_order_obscene_dict = descend_order_dict(df_obscene)",
"_____no_output_____"
]
],
[
[
"These are the words most frequently used in obscene comments",
"_____no_output_____"
],
[
"<br>For **`threat`** category",
"_____no_output_____"
]
],
[
[
"descend_order_threat_dict = descend_order_dict(df_threat)",
"_____no_output_____"
]
],
[
[
"These are the words most frequently used in severe threat comments",
"_____no_output_____"
],
[
"<br>For **`insult`** category",
"_____no_output_____"
]
],
[
[
"descend_order_insult_dict = descend_order_dict(df_insult)",
"_____no_output_____"
]
],
[
[
"These are the words most frequently used in comments labeled as an insult",
"_____no_output_____"
],
[
"<br>For **`identity_hate`** category",
"_____no_output_____"
]
],
[
[
"descend_order_id_hate_dict = descend_order_dict(df_identity_hate)",
"_____no_output_____"
]
],
[
[
"These are the most frequently used words in the comments labeled as identity_hate",
"_____no_output_____"
],
[
"<br> For comments when all categories are 1",
"_____no_output_____"
]
],
[
[
"descend_order_all_label_dict = descend_order_dict(df_6)",
"_____no_output_____"
]
],
[
[
"These are the most frequently used words in the comments labeled as identity_hate",
"_____no_output_____"
],
[
"#### <br> Picking up the top n words from the descend vocab dictionary",
"_____no_output_____"
],
[
"In this code, top 3 words are considered to make the prediction.",
"_____no_output_____"
]
],
[
[
"# list(descend_order_all_label_dict.keys())[3]",
"_____no_output_____"
],
[
"## combining descend vocab dictionaries of all the categories in one dictionary \n## with categories as their keys\n\nall_label_descend_vocab = {'toxic':descend_order_toxic_dict,\n 'severe_toxic':descend_order_severe_toxic_dict,\n 'obscene':descend_order_obscene_dict,\n 'threat':descend_order_threat_dict,\n 'insult':descend_order_insult_dict,\n 'id_hate':descend_order_id_hate_dict\n }",
"_____no_output_____"
],
[
"## this function takes two arguments - all_label_freq_word and top n picks\n## and outputs a dictionary with categories as keys and list of top 3 words as their values.\n\ndef dict_top_n_words(all_label_descend_vocab, n):\n count = dict()\n for label, words in all_label_descend_vocab.items():\n word_list = []\n for i in range (0,n):\n word_list.append(list(words.keys())[i])\n count[label] = word_list\n return count\n",
"_____no_output_____"
],
[
"### top 3 words from all the vocabs\ndict_top_n_words(all_label_descend_vocab,3)",
"_____no_output_____"
]
],
[
[
"### <br>4. Performance check of baseline Model",
"_____no_output_____"
]
],
[
[
"## Check if the any word from the top 3 words from the six categories exist in the comments\ndef word_intersection(input_str, n, all_words =all_label_descend_vocab):\n toxic_pred = []\n severe_toxic_pred = []\n obscene_pred = []\n threat_pred = []\n insult_pred = []\n id_hate_pred = []\n rule_based_pred = [toxic_pred, severe_toxic_pred, obscene_pred, threat_pred, \n insult_pred,id_hate_pred ]\n # top_n_words = dict_top_n_words[all_label_freq_word,n]\n \n for count,ele in enumerate(list(dict_top_n_words(all_label_descend_vocab,3).values())):\n\n for word in ele:\n if (word in input_str):\n rule_based_pred[count].append(word)\n #print(rule_based_pred)\n for i in range (0,len(rule_based_pred)):\n if len(rule_based_pred[i])== 0:\n rule_based_pred[i]= 0\n else:\n rule_based_pred[i]= 1\n return rule_based_pred\n",
"_____no_output_____"
],
[
"### Test\nword_intersection(df['comment_text'][55], 3)",
"_____no_output_____"
]
],
[
[
"<br>Uncomment the below cell to get the prediction on the dataset but it is already saved in `rule_base_pred.pkl` in list form to save time",
"_____no_output_____"
]
],
[
[
"## store the values of predictions by running the word_intersection function on \n## all the comments\n\n# rule_base_pred = df['comment_text'].apply(lambda x: word_intersection(x,3))",
"_____no_output_____"
]
],
[
[
"After running above cell, we get the predictions on the entire dataset for each category in `rule_base_pred`, the orginal type of `rule_base_pred` is pandas.core.series.Series. This pandas series is converted into list and saved for future use. This `.pkl` fine can be loaded by running below cell.",
"_____no_output_____"
]
],
[
[
"### save rule_base_pred\n# file_name = \"rule_base_pred.pkl\"\n\n# open_file = open(file_name, \"wb\")\n# pickle.dump(rule_base_pred, open_file)\n# # open_file.close()\n# open_file = open(\"rule_base_pred.pkl\", \"rb\")\n# pred_rule = pickle.load(open_file)\n# open_file.close()",
"_____no_output_____"
],
[
"### Open the saved rule_base_pred.pkl\npkl_file = os.path.join(dir_path, 'model', 'rule_base_pred.pkl')\nopen_file = open(pkl_file, \"rb\")\npred_rule = pickle.load(open_file)\nopen_file.close()",
"_____no_output_____"
],
[
"## true prediction \ny_true = df.drop(['id', 'comment_text'], axis=1)",
"_____no_output_____"
],
[
"## check the type \ntype(y_true), type(pred_rule)",
"_____no_output_____"
]
],
[
[
"<br>Uncomment pred_rule line in below cell to convert the type of predictions from panda series to list,if not using saved `rule_base_pred.pkl`",
"_____no_output_____"
]
],
[
[
"### Change the type to list\npred_true = y_true.values.tolist()\n# pred_rule = rule_base_pred.values.tolist() ",
"_____no_output_____"
]
],
[
[
"#### Compute accuracy of Baseline Model",
"_____no_output_____"
]
],
[
[
"## Accuracy check for decent and not-decent comments classification\ncount = 0\nfor i in range(0, len(df)):\n if pred_true[i] == pred_rule[i]:\n count = count+1\nprint(\"Overall accuracy of rule based classifier : {}\".format((count/len(df))*100))",
"Overall accuracy of rule based classifier : 76.5615306039318\n"
]
],
[
[
"Based on the rule implimented here, baseline classifier is classifying decent and not-decent comments with the **accuracy of 76.6%**.Now we have to see if AI based models giver better performance than this.",
"_____no_output_____"
]
],
[
[
"## Category wise accuracy check\nmean = []\nfor j in range(0, len(pred_true[0])):\n count = 0\n for i in range(0, len(df)):\n if pred_true[i][j] == pred_rule[i][j]:\n count = count+1\n mean.append(count/len(df)*100)\n print(\"Accuracy of rule based classifier in predicting {} comments : {}\".format(label_sequence[j],(count/len(df))*100))\nprint(\"Mean accuracy : {}\".format(np.array(mean).mean()))",
"Accuracy of rule based classifier in predicting toxic comments : 89.4554774990443\nAccuracy of rule based classifier in predicting severe_toxic comments : 88.22906417832814\nAccuracy of rule based classifier in predicting obscene comments : 96.3282802012897\nAccuracy of rule based classifier in predicting threat comments : 87.83801567954077\nAccuracy of rule based classifier in predicting insult comments : 95.77930827029975\nAccuracy of rule based classifier in predicting identity_hate comments : 98.28916281780525\nMean accuracy : 92.65321810771799\n"
]
],
[
[
"Mean accuracy of our *rule-based-model* is 92.7%<br>\nMinimum accuracy for predicting `toxic `, `severe_toxic `, `obscene `, `threat `, `insult `, or `identity_hate ` class of the Baseline model is more that 88%.\n<br>Accuracies for:\n<ol>\n<li>`toxic `: 89.4%</li>\n<li>`severe_toxic `: 88.2%</li>\n<li>`obscene `: 96.3%</li>\n<li>`threat `: 87.8%</li>\n<li>`insult `: 95.8%</li>\n<li>`identity_hate `: 98.3%</li>\n</ol>\n<br>In my opinion this model is doing quite good. As we know the dataset have more samples for toxic comments as compared to rest of the categories but this model still managed to predict with 89.4% of accuracy by just considering the top 3 words from its very large vocabulary. It may perform better if we consider more than 3 words from its vocab, because top 3 words not necessarily a true representaion of this category.\n<br>On the other hand, `obscene `, `insult `, and `identity_hate ` have very good accuracy rates, seems like human labellers looked for these top 3 words to label comments under these categories.\n<br>For `threat ` category, the model should perform well as the number of sample for this category is just 478, that means it has smaller vocab comparative to other classes. but seems like human labellers looked at more than these top 3 words of its vocab. It could be checked by tweaking the number of top n words.\n",
"_____no_output_____"
]
],
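[
[
"# Sketch of the check suggested above (hypothetical, not part of the original run):\n# sweep the number of top-n words per category and compare overall accuracy on a\n# small random sample of comments to see whether a larger vocabulary helps the rule.\nsample = df.sample(2000, random_state=0)\nsample_true = sample.drop(['id', 'comment_text'], axis=1).values.tolist()\nfor n in [3, 5, 10]:\n    top_words = dict_top_n_words(all_label_descend_vocab, n)\n    preds = sample['comment_text'].apply(\n        lambda x: [1 if any(w in x for w in words) else 0 for words in top_words.values()]).tolist()\n    matches = sum(1 for t, p in zip(sample_true, preds) if t == p)\n    print('top', n, 'words -> accuracy:', 100 * matches / len(sample))",
"_____no_output_____"
]
],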
[
[
"yp=np.array([np.array(xi) for xi in pred_rule])\ntype(yp)\n# type(y[0])",
"_____no_output_____"
],
[
"yp.shape",
"_____no_output_____"
],
[
"yt=np.array([np.array(xi) for xi in pred_true])\ntype(yt)",
"_____no_output_____"
],
[
"yt.shape",
"_____no_output_____"
],
[
"from sklearn.metrics import jaccard_score",
"_____no_output_____"
],
[
"print(\"Jaccard score is : {}\".format(jaccard_score(yt,yp, average= 'weighted')))",
"Jaccard score is : 0.25033619059083945\n"
]
],
[
[
"Our `rule based model` is really bad seeing jaccard similarity",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c165fc86835f63678259f0dd3c10e928399360 | 3,657 | ipynb | Jupyter Notebook | docs/notebooks/Shapefile_Demo.ipynb | crose26/rosegeomap | 909fc4ecc26344c28dea078ed2bd8d59a5dd5079 | [
"MIT"
] | null | null | null | docs/notebooks/Shapefile_Demo.ipynb | crose26/rosegeomap | 909fc4ecc26344c28dea078ed2bd8d59a5dd5079 | [
"MIT"
] | null | null | null | docs/notebooks/Shapefile_Demo.ipynb | crose26/rosegeomap | 909fc4ecc26344c28dea078ed2bd8d59a5dd5079 | [
"MIT"
] | null | null | null | 21.511765 | 120 | 0.468417 | [
[
[
"import rosegeomap\nfrom rosegeomap.utils import random_string",
"_____no_output_____"
],
[
"m = rosegeomap.Map()\nm",
"_____no_output_____"
],
[
" style = {\n \"stroke\": True,\n \"color\": \"#ff0000\",\n \"weight\": 2,\n \"opacity\": 1,\n \"fill\": True,\n \"fillColor\": \"#00ff00\",\n \"fillOpacity\": 0.5,\n }",
"_____no_output_____"
],
[
"states_geojson = \"C:/Users/14234/lab6demo/examples/data/us_states.geojson\"",
"_____no_output_____"
],
[
"m.add_geojson(states_geojson,style = style, layer_name=\"states\")",
"_____no_output_____"
],
[
" style2= {\n \"stroke\": True,\n \"color\": \"#0000ff\",\n \"weight\": 2,\n \"opacity\": 1,\n \"fill\": True,\n \"fillColor\": \"#00ff00\",\n \"fillOpacity\": .5\n }",
"_____no_output_____"
],
[
"countries_shp = \"C:/Users/14234/lab6demo/examples/data/countries.shp\"",
"_____no_output_____"
],
[
"m.add_shapefile(countries_shp, style=style2, layer_name=\"Shapefile\")",
"_____no_output_____"
],
[
"random_string()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c175654971640632082542f2913cec729b5d3c | 138,721 | ipynb | Jupyter Notebook | EDA/notebooks/pruebas_mutaciones.ipynb | paleomau/MGOL_BOOTCAMP | 8c2b018f49fd12a255ea6f323141260d04d4421d | [
"MIT"
] | null | null | null | EDA/notebooks/pruebas_mutaciones.ipynb | paleomau/MGOL_BOOTCAMP | 8c2b018f49fd12a255ea6f323141260d04d4421d | [
"MIT"
] | null | null | null | EDA/notebooks/pruebas_mutaciones.ipynb | paleomau/MGOL_BOOTCAMP | 8c2b018f49fd12a255ea6f323141260d04d4421d | [
"MIT"
] | null | null | null | 120.313096 | 8,087 | 0.405988 | [
[
[
"import pandas as pd \nimport numpy as np ",
"_____no_output_____"
],
[
"mutaciones_bruto_lung = pd.read_csv(\"D:\\MAURO\\CURSOS\\Master_The_Bridge\\Bridge_Python\\MGOL_BOOTCAMP\\proyecto2\\EDA\\data\\LUNG_alterations_across_samples.tsv\", sep='\\t')\nmutaciones_bruto_lung",
"_____no_output_____"
],
[
"mutaciones_bruto_lung.set_index('Patient ID', inplace=True)\nmutaciones_bruto_lung",
"_____no_output_____"
],
[
"mutaciones_bruto_lung = mutaciones_bruto_lung.iloc[:,2:]\nmutaciones_bruto_lung",
"_____no_output_____"
],
[
"mutaciones_bruto_lung = mutaciones_bruto_lung.replace(['no alteration', 'not profiled'],'0')",
"_____no_output_____"
],
[
"mutaciones_bruto_lung",
"_____no_output_____"
],
[
"mutaciones_bruto_lung1 = mutaciones_bruto_lung[mutaciones_bruto_lung.Altered != 0]\nmutaciones_bruto_lung1",
"_____no_output_____"
],
[
"mutaciones_bruto_lung1.drop(['Altered'], axis=1, inplace = True)\nmutaciones_bruto_lung1",
"C:\\Users\\MAUCRO\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\pandas\\core\\frame.py:4315: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n errors=errors,\n"
],
[
"\nmutaciones_bruto_lung1.replace('0', np.nan, inplace=True)\nmutaciones_bruto_lung1",
"C:\\Users\\MAUCRO\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\pandas\\core\\frame.py:4530: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n method=method,\n"
],
[
"code = ('A', 'B', 'C', 'D', 'E','F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R')\ndef sustituir(df, code):\n code = ('A', 'B', 'C', 'D', 'E','F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R')\n diccionario = {key:value for key, value in zip(mutaciones_bruto_lung1.columns, code)}\n diccionario\n for col in df.columns:\n df.loc[df[col].notnull(), col] = diccionario[col]\n return df \nsustituir(df= mutaciones_bruto_lung1, code = code)",
"C:\\Users\\MAUCRO\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\pandas\\core\\indexing.py:1720: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self._setitem_single_column(loc, value, pi)\n"
],
[
"mutaciones_bruto_lung1.drop(mutaciones_bruto_lung.iloc[:,5:6], axis = 1, inplace = True)\n\nmutaciones_bruto_lung1",
"C:\\Users\\MAUCRO\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\pandas\\core\\frame.py:4315: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n errors=errors,\n"
],
[
"mutaciones_lung_B = mutaciones_bruto_lung1.copy()",
"_____no_output_____"
],
[
"mutaciones_lung_B[mutaciones_bruto_lung1.isnull()==False] = 1\nmutaciones_lung_B",
"_____no_output_____"
],
[
"mutaciones_lung_B = mutaciones_lung_B.fillna(-1)\nmutaciones_lung_B",
"_____no_output_____"
],
[
"Tipos = mutaciones_bruto_lung1.apply(lambda x: ','.join(x.dropna()), axis=1)\nTipos",
"_____no_output_____"
],
[
"mutaciones_bruto_lung1['Tipos'] = Tipos\nmutaciones_bruto_lung1",
"ipykernel_launcher:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n"
],
[
"item_counts = mutaciones_bruto_lung1[\"Tipos\"].value_counts()\nprint(item_counts)",
"A,J 97\nA,I,J,Q 71\nA,D,J,M 54\nB,K 53\nD,M 49\n ..\nA,B,F,G,I,J,K,N,O,Q 1\nG,I,O,Q 1\nA,D,H,J,M,P 1\nB,C,F,G,I,K,L,N,O,Q 1\nC,F,G,I,L,N,O,Q 1\nName: Tipos, Length: 91, dtype: int64\n"
],
[
"lung_NN = pd.merge(mutaciones_lung_B, pacientes_lung_clean, on=[\"track_name\", \"Patient ID\"])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c17c34725bb698cc1b3f08378511013730109a | 105,032 | ipynb | Jupyter Notebook | labeling_data_prep.ipynb | patrickcgray/florence_mapping | 99e2d452a0b2d32a75b42a94086c9a7272e8788e | [
"MIT"
] | null | null | null | labeling_data_prep.ipynb | patrickcgray/florence_mapping | 99e2d452a0b2d32a75b42a94086c9a7272e8788e | [
"MIT"
] | null | null | null | labeling_data_prep.ipynb | patrickcgray/florence_mapping | 99e2d452a0b2d32a75b42a94086c9a7272e8788e | [
"MIT"
] | 1 | 2019-02-17T17:22:02.000Z | 2019-02-17T17:22:02.000Z | 46.910228 | 9,396 | 0.583118 | [
[
[
"# Manual Labeling Data Preparation\n\nGenerate the pixels that will be used for train, test, and validation. This keeps pixels a certain distance and ensures they're spatially comprehensive.",
"_____no_output_____"
]
],
[
[
"import rasterio\nimport random\nimport matplotlib.pyplot as plt\nimport os\nimport sys\nimport datetime\nfrom sklearn.utils import class_weight\nmodule_path = os.path.abspath(os.path.join('..'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n \nmodule_path = os.path.abspath(os.path.join('rcnn'))\nif module_path not in sys.path:\n sys.path.append(module_path)\n\nimport utilities as util\nimport importlib\nimport rnn_tiles\nimport rnn_pixels\nimport numpy as np\n\nimport pandas as pd\nimport geopandas as gpd\nfrom shapely.geometry import Point",
"_____no_output_____"
],
[
"importlib.reload(rnn_pixels)\nimportlib.reload(rnn_tiles)\nimportlib.reload(util)",
"_____no_output_____"
]
],
[
[
"Load in the training data",
"_____no_output_____"
]
],
[
[
"lc_labels = rasterio.open('/deep_data/recurrent_data/NLCD_DATA/landcover/NLCD_2011_Land_Cover_L48_20190424.img')\ncanopy_labels = rasterio.open('/deep_data/recurrent_data/NLCD_DATA/canopy/CONUSCartographic_2_8_16/Cartographic/nlcd2011_usfs_conus_canopy_cartographic.img')",
"_____no_output_____"
]
],
[
[
"### Ingest the landsat imagery stacked into yearly seasonal tiles\n\nWe don't really need to do this here but the code is just copied from the rcnn code",
"_____no_output_____"
]
],
[
[
"tiles = {}\nlandsat_datasets = {}\ntiles['028012'] = ['20110103', '20110308', '20110730', '20110831', '20111103']\ntiles['029011'] = ['20110103', '20110308', '20110730', '20110831', '20111018']\ntiles['028011'] = ['20110103', '20110308', '20110831', '20111018', '20111103']\n\nfor tile_number, dates in tiles.items():\n tile_datasets = []\n l8_image_paths = []\n for date in dates:\n l8_image_paths.append('/deep_data/recurrent_data/tile{}/combined/combined{}.tif'.format(tile_number, date))\n for fp in l8_image_paths:\n tile_datasets.append(rasterio.open(fp))\n landsat_datasets[tile_number] = tile_datasets",
"_____no_output_____"
],
[
"tile_size = 13\nclass_count = 6\n\ntile_list = ['028012', '029011', '028011']\nclass_dict = util.indexed_dictionary\n#px = rnn_pixels.make_pixels(tile_size, tile_list)",
"_____no_output_____"
],
[
"#clean_px = rnn_pixels.make_clean_pix(tile_list, tile_size, landsat_datasets,lc_labels, canopy_labels, 100, buffer_pix=1)",
"_____no_output_____"
]
],
[
[
"### Testing for Runtime and Memory Usage",
"_____no_output_____"
]
],
[
[
"%load_ext line_profiler",
"_____no_output_____"
],
[
"%lprun -f rnn_pixels.tvt_pix_locations rnn_pixels.tvt_pix_locations(landsat_datasets, lc_labels, canopy_labels, tile_size, tile_list, clean_pixels, 10, 10, 10, class_dict)",
"_____no_output_____"
],
[
"w_tile_gen = rnn_tiles.rnn_tile_gen(landsat_datasets, lc_labels, canopy_labels, tile_size, class_count)\nw_generator = w_tile_gen.tile_generator(clean_pixels, batch_size=1, flatten=True, canopy=True)",
"_____no_output_____"
],
[
"%lprun -f w_tile_gen.tile_generator w_tile_gen.tile_generator(clean_pixels[:2], batch_size=1, flatten=True, canopy=True)",
"_____no_output_____"
],
[
"%timeit w_tile_gen.tile_generator(clean_pixels[:2], batch_size=1, flatten=True, canopy=True)",
"6.15 ms ± 111 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
]
],
[
[
"### Generate Data for TVT",
"_____no_output_____"
]
],
[
[
"px = rnn_pixels.make_pixels(1, tile_list)",
"_____no_output_____"
],
[
"len(px)",
"_____no_output_____"
],
[
"px",
"_____no_output_____"
],
[
"2500000 / 2500 * 45 / 60 / 60",
"_____no_output_____"
],
[
"print(datetime.datetime.now())\npx_to_use = px\nclean_pixels = rnn_pixels.delete_bad_tiles(landsat_datasets,lc_labels, canopy_labels, px_to_use, tile_size, buffer_pix=1)\nprint(datetime.datetime.now())\nprint(len(clean_pixels))",
"2020-05-12 21:46:45.542400\n2020-05-14 07:34:16.573352\n28729362\n"
],
[
"print(datetime.datetime.now())\npx_to_use = px\nclean_pixels = rnn_pixels.delete_bad_tiles(landsat_datasets,lc_labels, canopy_labels, px_to_use, tile_size, buffer_pix=1)\nprint(datetime.datetime.now())\nprint(len(clean_pixels))",
"2020-02-07 16:03:29.854261\n2020-02-08 19:57:27.823491\n24268666\n"
],
[
"#clean_pixels_subset = clean_pixels[:10000]",
"_____no_output_____"
],
[
"print(datetime.datetime.now())\ntvt_pixels = rnn_pixels.tvt_pix_locations(landsat_datasets, lc_labels, \n canopy_labels, tile_size, tile_list, clean_pixels, 150, 150, 1500, class_dict)\nprint(datetime.datetime.now())\nprint('test:', len(tvt_pixels[0]), 'val:',len(tvt_pixels[1]), 'train:',len(tvt_pixels[2]))",
"2020-05-14 07:34:16.581719\nBeginning TVT pixel creation.\nIterating through data and clipping for balance.\n\nProcessing Complete.\n2020-05-14 08:59:20.571700\ntest: 900 val: 900 train: 9000\n"
],
[
"print(datetime.datetime.now())\ntvt_pixels = rnn_pixels.tvt_pix_locations(landsat_datasets, lc_labels, \n canopy_labels, tile_size, tile_list, clean_pixels, 150, 150, 1500, class_dict)\nprint(datetime.datetime.now())\nprint('test:', len(tvt_pixels[0]), 'val:',len(tvt_pixels[1]), 'train:',len(tvt_pixels[2]))",
"2020-02-08 19:57:27.832484\nBeginning TVT pixel creation.\nIterating through data and clipping for balance.\n\nProcessing Complete.\n2020-02-09 05:26:35.530992\ntest: 900 val: 900 train: 9000\n"
]
],
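[
[
"# The two runs above take many hours, so it is worth stashing the results to disk.\n# A minimal sketch with pickle; the file names are arbitrary and it assumes\n# clean_pixels and tvt_pixels are plain picklable Python objects.\nimport pickle\n\nwith open('clean_pixels.pkl', 'wb') as f:\n    pickle.dump(clean_pixels, f)\nwith open('tvt_pixels.pkl', 'wb') as f:\n    pickle.dump(tvt_pixels, f)\n\n# later: with open('tvt_pixels.pkl', 'rb') as f: tvt_pixels = pickle.load(f)",
"_____no_output_____"
]
],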
[
[
"#### See if Data is Actually Balanced",
"_____no_output_____"
]
],
[
[
"class_count = 6\n\npixels = tvt_pixels[2]\n\n# gets balanced pixels locations \nw_tile_gen = rnn_tiles.rnn_tile_gen(landsat_datasets, lc_labels, canopy_labels, tile_size, class_count)\nw_generator = w_tile_gen.tile_generator(pixels, batch_size=1, flatten=True, canopy=True)\ntotal_labels = list()\ncount = 0\n#buckets = {0:[], 1:[], 2:[], 3:[], 4:[], 5:[]}\nbuckets = {0:[], 1:[], 2:[], 3:[], 4:[], 5:[]}\n\nwhile count < len(pixels):\n image_b, label_b = next(w_generator)\n #print(image_b['tile_input'].shape)\n buckets[np.argmax(label_b[\"landcover\"])].append({\n \"pixel_loc\" : pixels[count][0],\n \"tile_name\" : pixels[count][1],\n \"landcover\" : np.argmax(label_b[\"landcover\"]),\n \"canopy\" : float(label_b[\"canopy\"])\n }) # appends pixels to dictionary\n count+=1\ncount = 0 \nfor z, j in buckets.items():\n print(z, len(j))\n count += len(j)\nprint(count) ",
"0 1500\n1 1500\n2 1500\n3 1500\n4 1500\n5 1500\n9000\n"
],
[
"# a run with 10,000,000 pixels \n4 hours for delete bad pixels and 2.5 hours to create tvt\n\nended up at\n\n0 1500\n1 1500\n2 1500\n3 1500\n4 204\n5 1500\n7704",
"_____no_output_____"
]
],
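[
[
"# sklearn's class_weight is imported at the top but never used. If the classes had stayed\n# unbalanced (like the 10,000,000-pixel run noted above, where class 4 only reached 204),\n# per-class weights for training could be computed like this -- a minimal sketch that only\n# reuses the buckets dict from the balance check above.\nimport numpy as np\nfrom sklearn.utils import class_weight\n\nall_labels = np.array([b['landcover'] for b in sum(buckets.values(), [])])\nweights = class_weight.compute_class_weight(class_weight='balanced',\n                                            classes=np.unique(all_labels),\n                                            y=all_labels)\nprint(dict(zip(np.unique(all_labels), weights)))",
"_____no_output_____"
]
],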
[
[
"run through the pixels, buffer each pixel and add it to a geopandas dataset, convert that CRS to 4326 then save that geopandas dataset as a shapefile ",
"_____no_output_____"
]
],
[
[
"#count_per_class = 3000\ncount_per_class = 1500 # 1667 * 6 ~= 10,000\npixel_coords = []\n\nfor lc_class in buckets.keys():\n for i, pixel in enumerate(buckets[lc_class]):\n landsat_ds = landsat_datasets[pixel[\"tile_name\"]][0] # get the stack of ls datasets from that location and take the first\n x, y = landsat_ds.xy(pixel[\"pixel_loc\"][0], pixel[\"pixel_loc\"][1])\n pixel_coords.append({\n \"x\" : x,\n \"y\" : y,\n \"row\" : pixel[\"pixel_loc\"][0],\n \"col\" : pixel[\"pixel_loc\"][1],\n \"label\" : pixel[\"landcover\"],\n \"canopy\" : pixel[\"canopy\"],\n \"tile_name\" : pixel[\"tile_name\"]\n })\n if i > count_per_class:\n break",
"_____no_output_____"
],
[
"# create a dataframe from the pixel coordinates\ndf = pd.DataFrame(pixel_coords)\ndf.hist(column=\"label\")",
"_____no_output_____"
],
[
"landsat_datasets[\"029011\"][0].crs",
"_____no_output_____"
],
[
"gdf = gpd.GeoDataFrame(df, geometry = gpd.points_from_xy(df.x, df.y), crs=landsat_datasets[\"028011\"][0].crs)\ngdf.plot()",
"_____no_output_____"
],
[
"# buffer by 15 meters to make 30x30 pixel\nbuffer = gdf.buffer(15)\n\nenvelope = buffer.envelope \ngdf.geometry = envelope\ngdf.head()",
"_____no_output_____"
],
[
"gdf = gdf.to_crs(\"+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs\")\ngdf.head()",
"_____no_output_____"
],
[
"gdf.hist(column=\"label\", bins=11)",
"_____no_output_____"
],
[
"gdf.to_file(\"train_buffered_points140520.shp\",driver='ESRI Shapefile')",
"_____no_output_____"
],
[
"reopened_gdf = gpd.read_file(\"buffered_points.shp\")\nreopened_gdf.head()",
"_____no_output_____"
],
[
"reopened_gdf.hist(column=\"canopy\")",
"_____no_output_____"
],
[
"reopened_gdf.crs",
"_____no_output_____"
],
[
"reopened_gdf.head()",
"_____no_output_____"
],
[
"for index, row in df.iterrows():\n print(row['tile_name'])\n print(row['canopy'])\n print(row['geometry'])\n print(row['geometry'].centroid)\n break",
"028012\n0.0\nPOLYGON ((1761345 1488015, 1761375 1488015, 1761375 1488045, 1761345 1488045, 1761345 1488015))\nPOINT (1761360 1488030)\n"
],
[
"pd.cut(reopened_gdf['canopy'], 10).value_counts().sort_index()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c1815267852864e87118139c39004fb4ef467d | 435,296 | ipynb | Jupyter Notebook | UNET/UNET_Build.ipynb | hansong0219/Advanced-Deep-learning-Notebooks | 6107fdf54332411ec7008345faafc4a2bc08ae12 | [
"MIT"
] | 1 | 2020-10-17T13:55:01.000Z | 2020-10-17T13:55:01.000Z | UNET/UNET_Build.ipynb | hansong0219/Advanced-Deep-learning-Notebooks | 6107fdf54332411ec7008345faafc4a2bc08ae12 | [
"MIT"
] | null | null | null | UNET/UNET_Build.ipynb | hansong0219/Advanced-Deep-learning-Notebooks | 6107fdf54332411ec7008345faafc4a2bc08ae12 | [
"MIT"
] | null | null | null | 1,351.850932 | 415,914 | 0.950064 | [
[
[
"<a href=\"https://colab.research.google.com/github/hansong0219/Advanced-DeepLearning-Study/blob/master/UNET/UNET_Build.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport os\nimport sys\nfrom tensorflow.keras.layers import Input, Dropout, Concatenate\nfrom tensorflow.keras.layers import Conv2DTranspose, Conv2D\nfrom tensorflow.keras.models import Sequential, Model\nfrom tensorflow.keras.layers import LeakyReLU, Activation\nfrom tensorflow.keras.layers import BatchNormalization\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.utils import plot_model\nfrom tensorflow.keras.losses import BinaryCrossentropy\nimport matplotlib.pyplot as plt\nimport tensorflow as tf",
"_____no_output_____"
],
[
"def down_sample(layer_inputs,filters, size, apply_batchnorm=True):\n initializer = tf.random_normal_initializer(0.,0.02)\n d = Conv2D(filters, size, strides=2,padding='same', kernel_initializer=initializer, use_bias=False)(layer_inputs)\n if apply_batchnorm:\n d = BatchNormalization()(d)\n \n d = LeakyReLU(alpha=0.2)(d)\n return d\n\ndef up_sample(layer_inputs, skip_input,filters, size, dropout_rate=0):\n initializer = tf.random_normal_initializer(0.,0.02) \n u = Conv2DTranspose(filters, size, strides=2,padding='same', kernel_initializer=initializer,use_bias=False)(layer_inputs)\n if dropout_rate:\n u = Dropout(dropout_rate)(u)\n \n u = tf.keras.layers.ReLU()(u)\n u = Concatenate()([u, skip_input])\n return u",
"_____no_output_____"
],
[
"def Build_UNET():\n input_shape = (256,256,3)\n output_channel = 3\n inputs = Input(shape=input_shape,name=\"inputs\")\n \n d1 = down_sample(inputs, 64, 4, apply_batchnorm=False) #(128,128,3)\n d2 = down_sample(d1, 128, 4) #(64,64,128)\n d3 = down_sample(d2, 256, 4)\n d4 = down_sample(d3, 512, 4)\n d5 = down_sample(d4, 512, 4)\n d6 = down_sample(d5, 512, 4)\n d7 = down_sample(d6, 512, 4)\n d8 = down_sample(d7, 512, 4)\n \n u7 = up_sample(d8, d7, 512, 4, dropout_rate = 0.5)\n u6 = up_sample(u7, d6, 512, 4, dropout_rate = 0.5)\n u5 = up_sample(u6, d5, 512, 4, dropout_rate = 0.5)\n u4 = up_sample(u5, d4, 512, 4)\n u3 = up_sample(u4, d3, 256, 4)\n u2 = up_sample(u3, d2, 128, 4)\n u1 = up_sample(u2, d1, 64, 4)\n \n initializer = tf.random_normal_initializer(0.,0.02)\n outputs = Conv2DTranspose(output_channel,\n kernel_size=4, \n strides=2, \n padding='same', \n kernel_initializer=initializer, \n activation='tanh')(u1)\n \n return Model(inputs, outputs)",
"_____no_output_____"
],
[
"unet = Build_UNET()",
"_____no_output_____"
],
[
"optimizer = Adam(1e-4, beta_1=0.5)\nunet.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"unet.summary()",
"Model: \"functional_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninputs (InputLayer) [(None, 256, 256, 3) 0 \n__________________________________________________________________________________________________\nconv2d (Conv2D) (None, 128, 128, 64) 3072 inputs[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu (LeakyReLU) (None, 128, 128, 64) 0 conv2d[0][0] \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 64, 64, 128) 131072 leaky_re_lu[0][0] \n__________________________________________________________________________________________________\nbatch_normalization (BatchNorma (None, 64, 64, 128) 512 conv2d_1[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_1 (LeakyReLU) (None, 64, 64, 128) 0 batch_normalization[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 32, 32, 256) 524288 leaky_re_lu_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 32, 32, 256) 1024 conv2d_2[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_2 (LeakyReLU) (None, 32, 32, 256) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 16, 16, 512) 2097152 leaky_re_lu_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 16, 16, 512) 2048 conv2d_3[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_3 (LeakyReLU) (None, 16, 16, 512) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 8, 8, 512) 4194304 leaky_re_lu_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 8, 8, 512) 2048 conv2d_4[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_4 (LeakyReLU) (None, 8, 8, 512) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 4, 4, 512) 4194304 leaky_re_lu_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 4, 4, 512) 2048 conv2d_5[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_5 (LeakyReLU) (None, 4, 4, 512) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 2, 2, 512) 4194304 leaky_re_lu_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 2, 2, 512) 2048 
conv2d_6[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_6 (LeakyReLU) (None, 2, 2, 512) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 1, 1, 512) 4194304 leaky_re_lu_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 1, 1, 512) 2048 conv2d_7[0][0] \n__________________________________________________________________________________________________\nleaky_re_lu_7 (LeakyReLU) (None, 1, 1, 512) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose (Conv2DTranspo (None, 2, 2, 512) 4194304 leaky_re_lu_7[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 2, 2, 512) 0 conv2d_transpose[0][0] \n__________________________________________________________________________________________________\nre_lu (ReLU) (None, 2, 2, 512) 0 dropout[0][0] \n__________________________________________________________________________________________________\nconcatenate (Concatenate) (None, 2, 2, 1024) 0 re_lu[0][0] \n leaky_re_lu_6[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_1 (Conv2DTrans (None, 4, 4, 512) 8388608 concatenate[0][0] \n__________________________________________________________________________________________________\ndropout_1 (Dropout) (None, 4, 4, 512) 0 conv2d_transpose_1[0][0] \n__________________________________________________________________________________________________\nre_lu_1 (ReLU) (None, 4, 4, 512) 0 dropout_1[0][0] \n__________________________________________________________________________________________________\nconcatenate_1 (Concatenate) (None, 4, 4, 1024) 0 re_lu_1[0][0] \n leaky_re_lu_5[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_2 (Conv2DTrans (None, 8, 8, 512) 8388608 concatenate_1[0][0] \n__________________________________________________________________________________________________\ndropout_2 (Dropout) (None, 8, 8, 512) 0 conv2d_transpose_2[0][0] \n__________________________________________________________________________________________________\nre_lu_2 (ReLU) (None, 8, 8, 512) 0 dropout_2[0][0] \n__________________________________________________________________________________________________\nconcatenate_2 (Concatenate) (None, 8, 8, 1024) 0 re_lu_2[0][0] \n leaky_re_lu_4[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_3 (Conv2DTrans (None, 16, 16, 512) 8388608 concatenate_2[0][0] \n__________________________________________________________________________________________________\nre_lu_3 (ReLU) (None, 16, 16, 512) 0 conv2d_transpose_3[0][0] \n__________________________________________________________________________________________________\nconcatenate_3 (Concatenate) (None, 16, 16, 1024) 0 re_lu_3[0][0] \n leaky_re_lu_3[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_4 (Conv2DTrans (None, 32, 32, 256) 4194304 concatenate_3[0][0] 
\n__________________________________________________________________________________________________\nre_lu_4 (ReLU) (None, 32, 32, 256) 0 conv2d_transpose_4[0][0] \n__________________________________________________________________________________________________\nconcatenate_4 (Concatenate) (None, 32, 32, 512) 0 re_lu_4[0][0] \n leaky_re_lu_2[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_5 (Conv2DTrans (None, 64, 64, 128) 1048576 concatenate_4[0][0] \n__________________________________________________________________________________________________\nre_lu_5 (ReLU) (None, 64, 64, 128) 0 conv2d_transpose_5[0][0] \n__________________________________________________________________________________________________\nconcatenate_5 (Concatenate) (None, 64, 64, 256) 0 re_lu_5[0][0] \n leaky_re_lu_1[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_6 (Conv2DTrans (None, 128, 128, 64) 262144 concatenate_5[0][0] \n__________________________________________________________________________________________________\nre_lu_6 (ReLU) (None, 128, 128, 64) 0 conv2d_transpose_6[0][0] \n__________________________________________________________________________________________________\nconcatenate_6 (Concatenate) (None, 128, 128, 128 0 re_lu_6[0][0] \n leaky_re_lu[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_7 (Conv2DTrans (None, 256, 256, 3) 6147 concatenate_6[0][0] \n==================================================================================================\nTotal params: 54,415,875\nTrainable params: 54,409,987\nNon-trainable params: 5,888\n__________________________________________________________________________________________________\n"
],
[
"plot_model(unet, show_shapes=True, dpi=64)",
"_____no_output_____"
],
[
"loss=BinaryCrossentropy(from_logits=True)\noptimizer = Adam(1e-4, beta_1=0.5)\nunet.compile(optimizer=optimizer, loss='mse', metrics=['accuracy'])",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c19e1ac076cb97cce1195230452657d6dcfd61 | 265,548 | ipynb | Jupyter Notebook | Kobe Bryant Tribute.ipynb | chesterking123/Kobe-Bryant-Tribute-Word-Cloud | b169abde67f8e1e7a6f1ee02730885c55cce69ca | [
"Apache-2.0"
] | 3 | 2020-01-31T06:02:52.000Z | 2020-07-03T07:00:36.000Z | Kobe Bryant Tribute.ipynb | chesterking123/Kobe-Bryant-Tribute-Word-Cloud | b169abde67f8e1e7a6f1ee02730885c55cce69ca | [
"Apache-2.0"
] | null | null | null | Kobe Bryant Tribute.ipynb | chesterking123/Kobe-Bryant-Tribute-Word-Cloud | b169abde67f8e1e7a6f1ee02730885c55cce69ca | [
"Apache-2.0"
] | null | null | null | 414.271451 | 166,316 | 0.929248 | [
[
[
"## Kobe Bryant Tribute Word Cloud",
"_____no_output_____"
],
[
"@Author: Deep Contractor",
"_____no_output_____"
]
],
[
[
"import pandas as pd \nimport numpy as np\nfrom os import path, getcwd\nimport matplotlib.pyplot as plt \nfrom wordcloud import WordCloud, ImageColorGenerator, STOPWORDS\nfrom PIL import Image",
"_____no_output_____"
],
[
"from twitterscraper import query_tweets \nimport datetime as dt",
"INFO: {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; x64; fr; rv:1.9.2.13) Gecko/20101203 Firebird/3.6.13'}\n"
],
[
"begin_date = dt.date(2020,1,26)\nend_date = dt.date(2020,1,27)\nlang = 'english'\nlimit = 10000",
"_____no_output_____"
],
[
"tweets = query_tweets(\"#Kobe\", begindate = begin_date, enddate=end_date, limit=limit, lang=lang)",
"INFO: queries: ['#Kobe since:2020-01-26 until:2020-01-27']\nINFO: Got 10009 tweets (10009 new).\n"
],
[
"df = pd.DataFrame(i.__dict__ for i in tweets)",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
],
[
"df.to_csv('tweets.csv')",
"_____no_output_____"
],
[
"user_tweets = pd.read_csv('tweets.csv')",
"_____no_output_____"
],
[
"user_tweets.shape",
"_____no_output_____"
],
[
"tweet_words = \" \".join(user_tweets.text.drop_duplicates())",
"_____no_output_____"
],
[
"#removing punchuation\nfrom nltk.tokenize import RegexpTokenizer\ntokenizer = RegexpTokenizer(r'\\w+')\nwords = tokenizer.tokenize(tweet_words)",
"_____no_output_____"
],
[
"import nltk\nnltk.download('stopwords')\nnltk.download('punkt')",
"[nltk_data] Downloading package stopwords to\n[nltk_data] C:\\Users\\deepc\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n[nltk_data] Downloading package punkt to\n[nltk_data] C:\\Users\\deepc\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n"
],
[
"words_1 = ' '.join(words)",
"_____no_output_____"
],
[
"#removing stop words\nfrom nltk.corpus import stopwords\nfrom nltk import word_tokenize\nstop_words = set(stopwords.words('english'))\ntext = word_tokenize(words_1)",
"_____no_output_____"
],
[
"text=[x.lower() for x in text]",
"_____no_output_____"
],
[
"unwanted = ['kobe','bryant','kobebryant','https','helicopter', 'crash','today','rip','accident','daughter','death','black','pic','com']",
"_____no_output_____"
],
[
"text = \" \".join([w for w in text if w not in unwanted])",
"_____no_output_____"
],
[
"no_digits=[]\nfor i in text:\n if not i.isdigit():\n no_digits.append(i)\nresult = ''.join(no_digits)",
"_____no_output_____"
],
[
"import nltk\nnltk.download('words')\nwords = set(nltk.corpus.words.words())\nsent = result\nfinal =\" \".join(w for w in nltk.wordpunct_tokenize(sent) \\\n if w.lower() in words or not w.isalpha())",
"[nltk_data] Downloading package words to\n[nltk_data] C:\\Users\\deepc\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package words is already up-to-date!\n"
],
[
"final",
"_____no_output_____"
],
[
"import os\np = os.getcwd()\nmask_logo = np.array(Image.open(path.join(p,'kobe.png')))",
"_____no_output_____"
],
[
"stopwords = set(STOPWORDS)\nstopwords.add('www')\nstopwords.add('instagram')\nstopwords.add('twitter')\nstopwords.add('igshid')\nstopwords.add('status')\nwc = WordCloud(background_color=\"white\", max_words=2000, mask=mask_logo,stopwords=stopwords)\n# generate word cloud\nwc.generate(text)\n# store to file\n# wc.to_file(path.join(p, \"kobe.png\"))\n# show\nplt.imshow(wc, interpolation='bilinear')\nplt.axis(\"off\")\nplt.figure()\nimage_colors = ImageColorGenerator(mask_logo)\nwc.recolor(color_func=image_colors).to_file('kobe.png')\nplt.figure(figsize=[10,10])\nplt.imshow(wc.recolor(color_func=image_colors), interpolation=\"bilinear\")\nplt.axis(\"off\")\nplt.show()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c1a810d998fc23ee66205a7ee8e4b3284d3270 | 105,677 | ipynb | Jupyter Notebook | examples/Gleitlager/.ipynb_checkpoints/Gleitlager_V1-checkpoint.ipynb | uwe-iben/torchphysics | f0a56539cff331d49caaa90bc2fdd0d238b298f8 | [
"Apache-2.0"
] | null | null | null | examples/Gleitlager/.ipynb_checkpoints/Gleitlager_V1-checkpoint.ipynb | uwe-iben/torchphysics | f0a56539cff331d49caaa90bc2fdd0d238b298f8 | [
"Apache-2.0"
] | null | null | null | examples/Gleitlager/.ipynb_checkpoints/Gleitlager_V1-checkpoint.ipynb | uwe-iben/torchphysics | f0a56539cff331d49caaa90bc2fdd0d238b298f8 | [
"Apache-2.0"
] | null | null | null | 214.79065 | 70,036 | 0.891272 | [
[
[
"import torch\nimport torchphysics as tp\nimport math\nimport numpy as np\nimport pytorch_lightning as pl",
"_____no_output_____"
],
[
"print('Tutorial zu TorchPhysics:')\nprint('https://torchphysics.readthedocs.io/en/latest/tutorial/tutorial_start.html')",
"Tutorial zu TorchPhysics:\nhttps://torchphysics.readthedocs.io/en/latest/tutorial/tutorial_start.html\n"
],
[
"from IPython.display import Image, Math, Latex\nfrom IPython.core.display import HTML \nImage(filename='bearing.png',width = 500, height = 250)",
"_____no_output_____"
],
[
"# First define all parameters:\nh_0 = 16.e-06 #m = 16 um\ndh = 14e-06 #m = 14 um \nD = 0.01 #m = 10 mm \nL = np.pi*D # Länge von Gebiet\nu_m = 0.26 #m/s 0.26\nbeta = 2.2*1e-08 # 2.2e-08 m^2/N\nnu_0 = 1.5e-03 # Pa·s = 1.5 mPa·s\n# lower and upper bounds of parameters\nnu0 = 1.0e-03 # Viskosität\nnu1 = 2.5e-03\num0 = 0.2 # Geschwindigkeit\num1 = 0.4\ndh0 = 10e-6 # Spaltvariaton\ndh1 = 15e-6\np_0 = 1e+5 # 1e+5 N/m^2 = 1 bar\np_rel = 0 # Relativdruck \np_skal = 100000 #Skalierungsdruck für (-1,1) Bereich",
"_____no_output_____"
],
[
"# define h:\ndef h(x, dh): # <- hier jetzt auch dh als input\n return h_0 + dh * torch.cos(2*x/D) # x in [0,pi*D] \n\n# and compute h':\ndef h_x(x, dh): # <- hier jetzt auch dh als input\n return -2.0*dh/D * torch.sin(2*x/D) # x in [0,pi*D]",
"_____no_output_____"
],
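[
"# A minimal sketch to visualize the gap profile h(x) over one circumference, reusing h()\n# and the constants above. The dh value passed in (14 um) is just the nominal value from\n# the parameter cell, purely for illustration.\nimport matplotlib.pyplot as plt\n\nx_plot = torch.linspace(0, L, 400)\nh_plot = h(x_plot, torch.tensor(14e-6))\n\nplt.plot(x_plot.numpy(), h_plot.numpy() * 1e6)\nplt.xlabel('x in m')\nplt.ylabel('gap height h in um')\nplt.show()",
"_____no_output_____"
],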
[
"# define the function of the viscosity.\n# Here we need torch.tensors, since the function will be evaluated in the pde.\n# At the beginng the model will have values close to 0, \n# therefore the viscosity will also be close to zero. \n# This will make the pde condition unstable, because we divide by nu.\n# For now set values smaller then 1e-06 to 1e-06 \ndef nu_func(nu, p):\n out = nu * torch.exp(beta * p*p_skal)\n return torch.clamp(out, min=1e-06)\n\ndef nu_x_func(nu,p):\n out = nu* beta*p_skal*torch.exp(beta*p*p_skal)\n return out ",
"_____no_output_____"
],
[
"# Variables:\nx = tp.spaces.R1('x')\nnu = tp.spaces.R1('nu')\num = tp.spaces.R1('um')\ndh = tp.spaces.R1('dh')\n# output\np = tp.spaces.R1('p')",
"_____no_output_____"
],
[
"A_x = tp.domains.Interval(x, 0, L)\nA_nu = tp.domains.Interval(nu, nu0, nu1)\nA_um = tp.domains.Interval(um, um0, um1)\nA_dh = tp.domains.Interval(dh, dh0, dh1)",
"_____no_output_____"
],
[
"#inner_sampler = tp.samplers.AdaptiveRejectionSampler(A_x*A_nu*A_um*A_dh, n_points = 50000)\ninner_sampler = tp.samplers.RandomUniformSampler(A_x*A_nu*A_um*A_dh, n_points = 10000)\n# density: 4 Punkte pro Einheitsvolumen\n# Boundaries\nboundary_v_sampler = tp.samplers.RandomUniformSampler(A_x.boundary*A_nu*A_um*A_dh, n_points = 5000)",
"_____no_output_____"
],
[
"#tp.utils.scatter(nu*um*dh, inner_sampler, boundary_v_sampler)",
"_____no_output_____"
],
[
"model = tp.models.Sequential(\n tp.models.NormalizationLayer(A_x*A_nu*A_um*A_dh),\n tp.models.FCN(input_space=x*nu*um*dh, output_space=p, hidden=(50,50,50))\n)",
"_____no_output_____"
],
[
"display(Math(r'h(x)\\frac{d^2 \\tilde{p}}{d x^2} +\\left( 3 \\frac{dh}{dx} - \\frac{h}{\\nu} \\frac{d \\nu}{d x} \\\n \\right) \\frac{d \\tilde{p}}{d x} = \\frac{6 u_m \\nu}{p_0 h^2} \\frac{d h}{d x}\\quad \\mbox{with} \\\n \\quad \\tilde{p}=\\frac{p}{p_{skal}} '))",
"_____no_output_____"
],
[
"from torchphysics.utils import grad\n# Alternativ tp.utils.grad\ndef pde(nu, p, x, um, dh): # <- brauchen jetzt dh und um auch als input\n # evaluate the viscosity and their first derivative\n nu = nu_func(nu,p)\n nu_x = nu_x_func(nu,p)\n # implement the PDE:\n # right hand site\n rs = 6*um*nu #<- hier jetzt um statt u_m, da deine Variable so heißt \n # h und h_x mit Input dh:\n h_out = h(x, dh) # nur einmal auswerten\n h_x_out = h_x(x, dh) # nur einmal auswerten\n #out = h_out * grad(grad(p,x),x)- rs*h_x_out/h_out/h_out/p_skal\n out = h_out*grad(grad(p,x),x) + (3*h_x_out -h_out/nu*nu_x)*grad(p,x) - rs*h_x_out/h_out/h_out/p_skal\n return out",
"_____no_output_____"
],
[
"pde_condition = tp.conditions.PINNCondition(module=model,\n sampler=inner_sampler,\n residual_fn=pde,\n name='pde_condition')",
"_____no_output_____"
],
[
"\n# Hier brauchen wir immer nur den output des modells, da die Bedingung nicht\n# von nu, um oder dh abhängt.\n\ndef bc_fun(p):\n return p-p_rel \n\nboundary_condition = tp.conditions.PINNCondition(module = model,\n sampler = boundary_v_sampler,\n residual_fn = bc_fun,\n name = 'pde_bc')",
"_____no_output_____"
],
[
"opt_setting = tp.solver.OptimizerSetting(torch.optim.AdamW, lr=1e-2) #SGD, LBFGS\nsolver = tp.solver.Solver((pde_condition, boundary_condition),optimizer_setting = opt_setting)",
"_____no_output_____"
],
[
"trainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,\n num_sanity_val_steps=0,\n benchmark=True,\n log_every_n_steps=1,\n max_steps=1000,\n #logger=False, zur Visualisierung im tensorboard\n checkpoint_callback=False\n )\n\ntrainer.fit(solver)",
"GPU available: False, used: False\nTPU available: False, using: 0 TPU cores\nIPU available: False, using: 0 IPUs\n\n | Name | Type | Params\n------------------------------------------------\n0 | train_conditions | ModuleList | 5.4 K \n1 | val_conditions | ModuleList | 0 \n------------------------------------------------\n5.4 K Trainable params\n0 Non-trainable params\n5.4 K Total params\n0.022 Total estimated model params size (MB)\nC:\\Users\\inu2sh\\Desktop\\Torch-Physics\\_venv\\lib\\site-packages\\pytorch_lightning\\trainer\\data_loading.py:105: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n rank_zero_warn(\nC:\\Users\\inu2sh\\Desktop\\Torch-Physics\\_venv\\lib\\site-packages\\pytorch_lightning\\trainer\\data_loading.py:105: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.\n rank_zero_warn(\n"
],
[
"opt_setting = tp.solver.OptimizerSetting(torch.optim.LBFGS, lr=1e-2) #SGD, LBFGS\nsolver = tp.solver.Solver((pde_condition, boundary_condition),optimizer_setting = opt_setting)\ntrainer = pl.Trainer(gpus='-1' if torch.cuda.is_available() else None,\n num_sanity_val_steps=0,\n benchmark=True,\n log_every_n_steps=1,\n max_steps=100,\n #logger=False, zur Visualisierung im tensorboard\n checkpoint_callback=False\n )\n\ntrainer.fit(solver)",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nsolver = solver.to('cpu')\nprint('nu0= ',nu0,' nu1= ',nu1)\nprint('dh0= ',dh0, 'dh1= ', dh1, 'm')\nprint('um0= ', um0, 'um1= ',um1, 'm/s')\n# Parameter definieren für Plot\nnu_plot = 2.0e-3 \num_plot = 0.4\ndh_plot = 14.25e-06\nprint('Minimale Spalthöhe =', h_0-dh_plot)\nplot_sampler = tp.samplers.PlotSampler(plot_domain=A_x, n_points=600, device='cpu',\n data_for_other_variables={'nu':nu_plot,\n 'um':um_plot,'dh':dh_plot})\nif nu0<=nu_plot and nu_plot<=nu1 and dh0<=dh_plot and dh_plot<=dh1 and um0<=um_plot and um_plot<=um1:\n fig = tp.utils.plot(model,lambda p:p,plot_sampler) \nelse:\n print('Ausserhalb des Trainingsbereiches')\nprint('Skalierungsfaktor = ', p_skal)\nplt.savefig(f'p_{um}.png', dpi=300)",
"nu0= 0.001 nu1= 0.0025\ndh0= 1e-05 dh1= 1.5e-05 m\num0= 0.2 um1= 0.4 m/s\nMinimale Spalthöhe = 1.7499999999999985e-06\nSkalierungsfaktor = 100000\n"
],
[
"import xlsxwriter\n#erstellen eines Workbook Objektes mit dem Dateinamen \"Gleitlager_***.xlsx\"\nworkbook = xlsxwriter.Workbook('Gleitlager.xlsx')\nworksheet = workbook.add_worksheet('Tabelle_1')\n\nworksheet.write('Ergebnistabelle Gleitlager')\nworksheet.write('nu', 'dh', 'um')\n\nworkbook.close()",
"_____no_output_____"
],
[
"import winsound\nfrequency = 2500 # Set Frequency To 2500 Hertz\nduration = 1000 # Set Duration To 1000 ms == 1 second\nwinsound.Beep(frequency, duration)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c1caac770df0e0523282fe58b3cd8bf2a23fc7 | 29,757 | ipynb | Jupyter Notebook | NaturalLanguajeProcessing.ipynb | qwerteleven/ToolsDeepLearning | d0a79a99f6a6d348c382f87661667f3b03936cea | [
"MIT"
] | null | null | null | NaturalLanguajeProcessing.ipynb | qwerteleven/ToolsDeepLearning | d0a79a99f6a6d348c382f87661667f3b03936cea | [
"MIT"
] | null | null | null | NaturalLanguajeProcessing.ipynb | qwerteleven/ToolsDeepLearning | d0a79a99f6a6d348c382f87661667f3b03936cea | [
"MIT"
] | null | null | null | 46.787736 | 472 | 0.407165 | [
[
[
"# <center> Natural Language Processing RNN y LSTM. </center>\n\n",
"_____no_output_____"
]
],
[
[
"from tensorflow.keras import *\nfrom tensorflow.keras.layers import *\nfrom tensorflow.keras.preprocessing.text import *\nfrom tensorflow.keras.utils import *\nimport tensorflow as tf\n\nimport numpy as np",
"_____no_output_____"
],
[
"data = [\"\"\"Con diez cañones por banda, \nviento en popa a toda vela, \nno corta el mar, sino vuela \nun velero bergantín;\n\nbajel pirata que llaman, \npor su bravura, el Temido, \nen todo mar conocido \ndel uno al otro confín.\n\nLa luna en el mar riela, \nen la lona gime el viento \ny alza en blando movimiento \nolas de plata y azul; \n\ny va el capitán pirata, \ncantando alegre en la popa, \nAsia a un lado, al otro Europa, \ny allá a su frente Estambul;\n\n—«Navega velero mío, \n sin temor, \nque ni enemigo navío, \nni tormenta, ni bonanza, \ntu rumbo a torcer alcanza, \nni a sujetar tu valor.\"\"\"]",
"_____no_output_____"
],
[
"\ntokens = tf.keras.preprocessing.text.Tokenizer(106, filters='!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n—«')\ntokens.fit_on_texts(data)\n",
"_____no_output_____"
],
[
"X = np.ravel(np.asarray(tokens.texts_to_sequences(data)))\nY = X[1:]\nY = np.append(Y,0)\nY = to_categorical(Y)\n\n",
"_____no_output_____"
],
[
"model = Sequential()\n\nmodel.add(Embedding(106, 30))\nmodel.add(LSTM(30))\nmodel.add(Dense(73, activation = 'softmax'))\n\nmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.fit(X, Y, epochs=200)\n",
"Epoch 1/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2911 - accuracy: 0.0094\nEpoch 2/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2883 - accuracy: 0.0566\nEpoch 3/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2861 - accuracy: 0.1038\nEpoch 4/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2836 - accuracy: 0.1038\nEpoch 5/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2814 - accuracy: 0.1132\nEpoch 6/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2791 - accuracy: 0.1226\nEpoch 7/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2766 - accuracy: 0.1132\nEpoch 8/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2742 - accuracy: 0.1321\nEpoch 9/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2715 - accuracy: 0.1321\nEpoch 10/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2688 - accuracy: 0.1415\nEpoch 11/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2659 - accuracy: 0.1509\nEpoch 12/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2627 - accuracy: 0.1509\nEpoch 13/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2594 - accuracy: 0.1604\nEpoch 14/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2558 - accuracy: 0.1604\nEpoch 15/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2518 - accuracy: 0.1698\nEpoch 16/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2474 - accuracy: 0.1792\nEpoch 17/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2429 - accuracy: 0.1792\nEpoch 18/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2376 - accuracy: 0.1792\nEpoch 19/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.2320 - accuracy: 0.1792\nEpoch 20/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2258 - accuracy: 0.1792\nEpoch 21/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2192 - accuracy: 0.1792\nEpoch 22/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2117 - accuracy: 0.1792\nEpoch 23/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.2035 - accuracy: 0.1792\nEpoch 24/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1950 - accuracy: 0.1887\nEpoch 25/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1852 - accuracy: 0.1981\nEpoch 26/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1741 - accuracy: 0.2075\nEpoch 27/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1623 - accuracy: 0.2075\nEpoch 28/200\n4/4 [==============================] - 0s 4ms/step - loss: 4.1493 - accuracy: 0.2170\nEpoch 29/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1348 - accuracy: 0.2170\nEpoch 30/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1187 - accuracy: 0.2075\nEpoch 31/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.1011 - accuracy: 0.2075\nEpoch 32/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.0826 - accuracy: 0.2358\nEpoch 33/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.0620 - accuracy: 0.2358\nEpoch 34/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.0393 - accuracy: 0.2358\nEpoch 35/200\n4/4 [==============================] - 0s 3ms/step - loss: 4.0145 - accuracy: 0.2358\nEpoch 36/200\n4/4 [==============================] - 0s 
3ms/step - loss: 3.9884 - accuracy: 0.2358\nEpoch 37/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.9582 - accuracy: 0.2453\nEpoch 38/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.9288 - accuracy: 0.2453\nEpoch 39/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.8944 - accuracy: 0.2453\nEpoch 40/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.8606 - accuracy: 0.2547\nEpoch 41/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.8240 - accuracy: 0.2453\nEpoch 42/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.7879 - accuracy: 0.2453\nEpoch 43/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.7475 - accuracy: 0.2453\nEpoch 44/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.7091 - accuracy: 0.2453\nEpoch 45/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.6689 - accuracy: 0.2264\nEpoch 46/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.6280 - accuracy: 0.2264\nEpoch 47/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.5880 - accuracy: 0.2453\nEpoch 48/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.5469 - accuracy: 0.2453\nEpoch 49/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.5084 - accuracy: 0.2736\nEpoch 50/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.4702 - accuracy: 0.2736\nEpoch 51/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.4329 - accuracy: 0.2736\nEpoch 52/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.3956 - accuracy: 0.2736\nEpoch 53/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.3592 - accuracy: 0.2736\nEpoch 54/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.3238 - accuracy: 0.2736\nEpoch 55/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.2881 - accuracy: 0.2736\nEpoch 56/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.2515 - accuracy: 0.2830\nEpoch 57/200\n4/4 [==============================] - 0s 4ms/step - loss: 3.2160 - accuracy: 0.2925\nEpoch 58/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.1796 - accuracy: 0.3113\nEpoch 59/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.1438 - accuracy: 0.3113\nEpoch 60/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.1079 - accuracy: 0.3113\nEpoch 61/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.0710 - accuracy: 0.3113\nEpoch 62/200\n4/4 [==============================] - 0s 3ms/step - loss: 3.0357 - accuracy: 0.3113\nEpoch 63/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.9990 - accuracy: 0.3208\nEpoch 64/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.9628 - accuracy: 0.3208\nEpoch 65/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.9256 - accuracy: 0.3396\nEpoch 66/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.8888 - accuracy: 0.3396\nEpoch 67/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.8522 - accuracy: 0.3491\nEpoch 68/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.8155 - accuracy: 0.3491\nEpoch 69/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.7791 - accuracy: 0.3585\nEpoch 70/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.7420 - accuracy: 0.3774\nEpoch 71/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.7060 - accuracy: 0.3868\nEpoch 
72/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.6684 - accuracy: 0.4151\nEpoch 73/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.6320 - accuracy: 0.4245\nEpoch 74/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.5954 - accuracy: 0.4340\nEpoch 75/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.5576 - accuracy: 0.4340\nEpoch 76/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.5190 - accuracy: 0.4528\nEpoch 77/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.4835 - accuracy: 0.4717\nEpoch 78/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.4460 - accuracy: 0.4811\nEpoch 79/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.4078 - accuracy: 0.4906\nEpoch 80/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.3692 - accuracy: 0.5000\nEpoch 81/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.3323 - accuracy: 0.5000\nEpoch 82/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.2948 - accuracy: 0.5000\nEpoch 83/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.2560 - accuracy: 0.5094\nEpoch 84/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.2195 - accuracy: 0.5189\nEpoch 85/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.1820 - accuracy: 0.5472\nEpoch 86/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.1456 - accuracy: 0.5472\nEpoch 87/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.1089 - accuracy: 0.5472\nEpoch 88/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.0720 - accuracy: 0.5660\nEpoch 89/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.0357 - accuracy: 0.5660\nEpoch 90/200\n4/4 [==============================] - 0s 3ms/step - loss: 2.0004 - accuracy: 0.5660\nEpoch 91/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.9643 - accuracy: 0.5755\nEpoch 92/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.9287 - accuracy: 0.5849\nEpoch 93/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.8934 - accuracy: 0.5849\nEpoch 94/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.8582 - accuracy: 0.6038\nEpoch 95/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.8249 - accuracy: 0.6038\nEpoch 96/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.7904 - accuracy: 0.6038\nEpoch 97/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.7571 - accuracy: 0.6132\nEpoch 98/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.7235 - accuracy: 0.6226\nEpoch 99/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.6913 - accuracy: 0.6226\nEpoch 100/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.6592 - accuracy: 0.6321\nEpoch 101/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.6277 - accuracy: 0.6321\nEpoch 102/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.5970 - accuracy: 0.6415\nEpoch 103/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.5673 - accuracy: 0.6509\nEpoch 104/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.5375 - accuracy: 0.6509\nEpoch 105/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.5083 - accuracy: 0.6604\nEpoch 106/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.4809 - accuracy: 0.6698\nEpoch 107/200\n4/4 [==============================] - 
0s 3ms/step - loss: 1.4536 - accuracy: 0.6698\nEpoch 108/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.4275 - accuracy: 0.6604\nEpoch 109/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.4027 - accuracy: 0.6698\nEpoch 110/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.3773 - accuracy: 0.6887\nEpoch 111/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.3529 - accuracy: 0.6887\nEpoch 112/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.3293 - accuracy: 0.6792\nEpoch 113/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.3067 - accuracy: 0.6887\nEpoch 114/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.2844 - accuracy: 0.6887\nEpoch 115/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.2626 - accuracy: 0.6981\nEpoch 116/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.2419 - accuracy: 0.6981\nEpoch 117/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.2210 - accuracy: 0.6981\nEpoch 118/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.2027 - accuracy: 0.6981\nEpoch 119/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.1828 - accuracy: 0.6887\nEpoch 120/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.1642 - accuracy: 0.6981\nEpoch 121/200\n4/4 [==============================] - 0s 4ms/step - loss: 1.1463 - accuracy: 0.6981\nEpoch 122/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.1279 - accuracy: 0.6981\nEpoch 123/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.1115 - accuracy: 0.6981\nEpoch 124/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0959 - accuracy: 0.6981\nEpoch 125/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0788 - accuracy: 0.6981\nEpoch 126/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0629 - accuracy: 0.6981\nEpoch 127/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0488 - accuracy: 0.6981\nEpoch 128/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0340 - accuracy: 0.6981\nEpoch 129/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0206 - accuracy: 0.6887\nEpoch 130/200\n4/4 [==============================] - 0s 3ms/step - loss: 1.0076 - accuracy: 0.6981\nEpoch 131/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9952 - accuracy: 0.6981\nEpoch 132/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9818 - accuracy: 0.6981\nEpoch 133/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9715 - accuracy: 0.6981\nEpoch 134/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9609 - accuracy: 0.6981\nEpoch 135/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9500 - accuracy: 0.6981\nEpoch 136/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9392 - accuracy: 0.6981\nEpoch 137/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9290 - accuracy: 0.7075\nEpoch 138/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9196 - accuracy: 0.7075\nEpoch 139/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9107 - accuracy: 0.7170\nEpoch 140/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.9016 - accuracy: 0.7075\nEpoch 141/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8925 - accuracy: 0.6981\nEpoch 142/200\n4/4 [==============================] - 0s 3ms/step - loss: 
0.8848 - accuracy: 0.7075\nEpoch 143/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8773 - accuracy: 0.7075\nEpoch 144/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8694 - accuracy: 0.7075\nEpoch 145/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8621 - accuracy: 0.7170\nEpoch 146/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8546 - accuracy: 0.7075\nEpoch 147/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8478 - accuracy: 0.6981\nEpoch 148/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8413 - accuracy: 0.7075\nEpoch 149/200\n4/4 [==============================] - ETA: 0s - loss: 0.7389 - accuracy: 0.78 - 0s 3ms/step - loss: 0.8346 - accuracy: 0.6981\nEpoch 150/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8288 - accuracy: 0.7075\nEpoch 151/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8233 - accuracy: 0.7075\nEpoch 152/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8166 - accuracy: 0.7075\nEpoch 153/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8118 - accuracy: 0.7075\nEpoch 154/200\n4/4 [==============================] - ETA: 0s - loss: 0.9753 - accuracy: 0.62 - 0s 3ms/step - loss: 0.8061 - accuracy: 0.7170\nEpoch 155/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.8010 - accuracy: 0.7075\nEpoch 156/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7958 - accuracy: 0.7075\nEpoch 157/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7911 - accuracy: 0.7075\nEpoch 158/200\n4/4 [==============================] - 0s 4ms/step - loss: 0.7864 - accuracy: 0.7075\nEpoch 159/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7822 - accuracy: 0.7075\nEpoch 160/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7776 - accuracy: 0.7075\nEpoch 161/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7731 - accuracy: 0.7075\nEpoch 162/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7689 - accuracy: 0.7075\nEpoch 163/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7651 - accuracy: 0.7075\nEpoch 164/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7609 - accuracy: 0.7075\nEpoch 165/200\n4/4 [==============================] - 0s 3ms/step - loss: 0.7572 - accuracy: 0.7075\nEpoch 166/200\n"
],
[
"\npre = model.predict_classes(X)\n\nprint (tokens.sequences_to_texts([pre]))",
"['diez cañones por su viento y la a un vela no corta el mar sino vuela un velero bergantín bajel pirata que ni por su bravura el mar en la mar sino del uno al otro confín la lona en la mar sino en la lona gime el mar y allá en la movimiento olas de plata y allá y allá el mar pirata que alegre en la lona a a un velero al otro confín y allá a un bravura estambul navega velero bergantín sin temor que ni a navío ni a ni a tu valor a un alcanza ni a un tu valor']\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c1d042270ec860de4aa5015455e264cff03219 | 848,386 | ipynb | Jupyter Notebook | Bexar_Records_First_Work.ipynb | ldhagen/Bexar_Records | b7091bc9f62a6607a86351648234cb4963292ccb | [
"MIT"
] | null | null | null | Bexar_Records_First_Work.ipynb | ldhagen/Bexar_Records | b7091bc9f62a6607a86351648234cb4963292ccb | [
"MIT"
] | null | null | null | Bexar_Records_First_Work.ipynb | ldhagen/Bexar_Records | b7091bc9f62a6607a86351648234cb4963292ccb | [
"MIT"
] | null | null | null | 43.973773 | 485 | 0.326778 | [
[
[
"Work looking at https://www.bexar.org/2988/Online-District-Clerk-Criminal-Records",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport datetime\n%matplotlib inline",
"_____no_output_____"
],
[
"Bexar_Criminal_AB_df = pd.read_csv(r'http://edocs.bexar.org/cc/DC_cjjorad_a_b.csv',header=0)",
"/usr/lib/python3/dist-packages/IPython/core/interactiveshell.py:3062: DtypeWarning: Columns (18) have mixed types.Specify dtype option on import or set low_memory=False.\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\n"
],
[
"Bexar_Criminal_C_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_c.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_DF_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_d_f.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_G_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_g.csv',header=0) ",
"_____no_output_____"
],
[
"Bexar_Criminal_HK_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_h_k.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_L_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_l.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_M_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_m.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_NQ_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_n_q.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_R_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_r.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_S_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_s.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_TV_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_t_v.csv',header=0)",
"_____no_output_____"
],
[
"Bexar_Criminal_WZ_df = pd.read_csv('http://edocs.bexar.org/cc/DC_cjjorad_w_z.csv',header=0)",
"_____no_output_____"
],
[
"dir()",
"_____no_output_____"
],
[
"pd.set_option('display.max_columns', None)",
"_____no_output_____"
],
[
"combined_list = [Bexar_Criminal_AB_df, Bexar_Criminal_C_df, Bexar_Criminal_DF_df, Bexar_Criminal_G_df, Bexar_Criminal_HK_df, Bexar_Criminal_L_df, Bexar_Criminal_M_df, Bexar_Criminal_NQ_df, Bexar_Criminal_R_df, Bexar_Criminal_S_df, Bexar_Criminal_TV_df, Bexar_Criminal_WZ_df]",
"_____no_output_____"
],
[
"combined_df = pd.concat(combined_list, axis=0, ignore_index=True)",
"_____no_output_____"
],
[
"combined_df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 391942 entries, 0 to 391941\nData columns (total 65 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 CASE-CAUSE-NBR 391942 non-null object \n 1 FULL-NAME 391942 non-null object \n 2 ALIAS 391942 non-null object \n 3 SEX 391942 non-null object \n 4 RACE 391942 non-null object \n 5 BIRTHDATE 391942 non-null object \n 6 SID 391942 non-null int64 \n 7 JUDICIAL-NBR 391942 non-null int64 \n 8 ADDR-HOUSE-NBR 391942 non-null object \n 9 HOUSE-SUF 391942 non-null object \n 10 ADDR-PRE-DIRECTION 391942 non-null object \n 11 ADDR-STREET 391942 non-null object \n 12 ADDR-STREET-SUFFIX 391942 non-null object \n 13 ADDR-POST-DIRECTION 391942 non-null object \n 14 ADDR-UNIT 391942 non-null object \n 15 ADDR-CITY 391942 non-null object \n 16 ADDR-STATE 391942 non-null object \n 17 ADDR-ZIP-CODE 391942 non-null int64 \n 18 ADDR-ZIP-PLUS-4 391942 non-null object \n 19 OFFENSE-DATE 391942 non-null object \n 20 OFFENSE-CODE 391942 non-null int64 \n 21 OFFENSE-DESC 391942 non-null object \n 22 OFFENSE-TYPE 391942 non-null object \n 23 REDUCED-OFFENSE-CODE 391942 non-null int64 \n 24 REDUCED-OFFENSE-DESC 391942 non-null object \n 25 REDUCED-OFFENSE-TYPE 391942 non-null object \n 26 LOCATION 391942 non-null object \n 27 CUSTODY-DATE 391942 non-null object \n 28 COMPLAINT-DATE 391942 non-null int64 \n 29 FILING-AGENCY-DESCRIPTION 391942 non-null object \n 30 CASE-DATE 391942 non-null object \n 31 CASE-DESC 391942 non-null object \n 32 SETTING-DATE 391942 non-null object \n 33 SETTING-TYPE 391942 non-null object \n 34 G-JURY-DATE 391942 non-null object \n 35 G-JURY-STATUS 391942 non-null object \n 36 DISPOSITION-DATE 391942 non-null object \n 37 DISPOSITION-CODE 391942 non-null int64 \n 38 DISPOSITION-DESC 391942 non-null object \n 39 JUDGEMENT-DATE 391942 non-null object \n 40 JUDGEMENT-CODE 391942 non-null int64 \n 41 JUDGEMENT-DESC 391942 non-null object \n 42 SENTENCE-DESC 391942 non-null object \n 43 SENTENCE 391942 non-null int64 \n 44 SENTENCE-START-DATE 391942 non-null object \n 45 SENTENCE-END-DATE 391942 non-null object \n 46 FINE-AMOUNT 391942 non-null float64\n 47 COURT-COSTS 391942 non-null float64\n 48 COURT-TYPE 391942 non-null object \n 49 COURT 391942 non-null object \n 50 POST-JUDICIAL-FIELD 391942 non-null object \n 51 POST-JUDICIAL-DATE 391942 non-null object \n 52 BOND-DATE 391942 non-null object \n 53 BOND-STATUS 391942 non-null object \n 54 BOND-AMOUNT 391942 non-null float64\n 55 BONDSMAN-NAME 391942 non-null object \n 56 ATTORNEY 391942 non-null object \n 57 ATTORNEY-BAR-NBR 391942 non-null int64 \n 58 ATTORNEY-APPOINTED-RETAINED 391942 non-null object \n 59 INTAKE-PROSECUTOR 391942 non-null object \n 60 OUTTAKE-PROSECUTOR 391942 non-null object \n 61 PROBATION-PROSECUTOR 391942 non-null object \n 62 REVOKATION-PROSECUTOR 391942 non-null object \n 63 ORIGINAL-SENTENCE 391942 non-null object \n 64 Unnamed: 64 0 non-null float64\ndtypes: float64(4), int64(10), object(51)\nmemory usage: 194.4+ MB\n"
],
[
"combined_df.loc[:, 'BIRTHDATE_dt'] = pd.to_datetime(combined_df['BIRTHDATE'], errors='coerce')",
"_____no_output_____"
],
[
"help(pd.to_datetime)",
"Help on function to_datetime in module pandas.core.tools.datetimes:\n\nto_datetime(arg: Union[~DatetimeScalar, List, Tuple, ~ArrayLike, ForwardRef('Series')], errors: str = 'raise', dayfirst: bool = False, yearfirst: bool = False, utc: Union[bool, NoneType] = None, format: Union[str, NoneType] = None, exact: bool = True, unit: Union[str, NoneType] = None, infer_datetime_format: bool = False, origin='unix', cache: bool = True) -> Union[pandas.core.indexes.datetimes.DatetimeIndex, ForwardRef('Series'), ~DatetimeScalar, ForwardRef('NaTType')]\n Convert argument to datetime.\n \n Parameters\n ----------\n arg : int, float, str, datetime, list, tuple, 1-d array, Series, DataFrame/dict-like\n The object to convert to a datetime.\n errors : {'ignore', 'raise', 'coerce'}, default 'raise'\n - If 'raise', then invalid parsing will raise an exception.\n - If 'coerce', then invalid parsing will be set as NaT.\n - If 'ignore', then invalid parsing will return the input.\n dayfirst : bool, default False\n Specify a date parse order if `arg` is str or its list-likes.\n If True, parses dates with the day first, eg 10/11/12 is parsed as\n 2012-11-10.\n Warning: dayfirst=True is not strict, but will prefer to parse\n with day first (this is a known bug, based on dateutil behavior).\n yearfirst : bool, default False\n Specify a date parse order if `arg` is str or its list-likes.\n \n - If True parses dates with the year first, eg 10/11/12 is parsed as\n 2010-11-12.\n - If both dayfirst and yearfirst are True, yearfirst is preceded (same\n as dateutil).\n \n Warning: yearfirst=True is not strict, but will prefer to parse\n with year first (this is a known bug, based on dateutil behavior).\n utc : bool, default None\n Return UTC DatetimeIndex if True (converting any tz-aware\n datetime.datetime objects as well).\n format : str, default None\n The strftime to parse time, eg \"%d/%m/%Y\", note that \"%f\" will parse\n all the way up to nanoseconds.\n See strftime documentation for more information on choices:\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.\n exact : bool, True by default\n Behaves as:\n - If True, require an exact format match.\n - If False, allow the format to match anywhere in the target string.\n \n unit : str, default 'ns'\n The unit of the arg (D,s,ms,us,ns) denote the unit, which is an\n integer or float number. This will be based off the origin.\n Example, with unit='ms' and origin='unix' (the default), this\n would calculate the number of milliseconds to the unix epoch start.\n infer_datetime_format : bool, default False\n If True and no `format` is given, attempt to infer the format of the\n datetime strings based on the first non-NaN element,\n and if it can be inferred, switch to a faster method of parsing them.\n In some cases this can increase the parsing speed by ~5-10x.\n origin : scalar, default 'unix'\n Define the reference date. The numeric values would be parsed as number\n of units (defined by `unit`) since this reference date.\n \n - If 'unix' (or POSIX) time; origin is set to 1970-01-01.\n - If 'julian', unit must be 'D', and origin is set to beginning of\n Julian Calendar. Julian day number 0 is assigned to the day starting\n at noon on January 1, 4713 BC.\n - If Timestamp convertible, origin is set to Timestamp identified by\n origin.\n cache : bool, default True\n If True, use a cache of unique, converted dates to apply the datetime\n conversion. 
May produce significant speed-up when parsing duplicate\n date strings, especially ones with timezone offsets. The cache is only\n used when there are at least 50 values. The presence of out-of-bounds\n values will render the cache unusable and may slow down parsing.\n \n .. versionchanged:: 0.25.0\n - changed default value from False to True.\n \n Returns\n -------\n datetime\n If parsing succeeded.\n Return type depends on input:\n \n - list-like: DatetimeIndex\n - Series: Series of datetime64 dtype\n - scalar: Timestamp\n \n In case when it is not possible to return designated types (e.g. when\n any element of input is before Timestamp.min or after Timestamp.max)\n return will have datetime.datetime type (or corresponding\n array/Series).\n \n See Also\n --------\n DataFrame.astype : Cast argument to a specified dtype.\n to_timedelta : Convert argument to timedelta.\n convert_dtypes : Convert dtypes.\n \n Examples\n --------\n Assembling a datetime from multiple columns of a DataFrame. The keys can be\n common abbreviations like ['year', 'month', 'day', 'minute', 'second',\n 'ms', 'us', 'ns']) or plurals of the same\n \n >>> df = pd.DataFrame({'year': [2015, 2016],\n ... 'month': [2, 3],\n ... 'day': [4, 5]})\n >>> pd.to_datetime(df)\n 0 2015-02-04\n 1 2016-03-05\n dtype: datetime64[ns]\n \n If a date does not meet the `timestamp limitations\n <https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n #timeseries-timestamp-limits>`_, passing errors='ignore'\n will return the original input instead of raising any exception.\n \n Passing errors='coerce' will force an out-of-bounds date to NaT,\n in addition to forcing non-dates (or non-parseable dates) to NaT.\n \n >>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')\n datetime.datetime(1300, 1, 1, 0, 0)\n >>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')\n NaT\n \n Passing infer_datetime_format=True can often-times speedup a parsing\n if its not an ISO8601 format exactly, but in a regular format.\n \n >>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'] * 1000)\n >>> s.head()\n 0 3/11/2000\n 1 3/12/2000\n 2 3/13/2000\n 3 3/11/2000\n 4 3/12/2000\n dtype: object\n \n >>> %timeit pd.to_datetime(s, infer_datetime_format=True) # doctest: +SKIP\n 100 loops, best of 3: 10.4 ms per loop\n \n >>> %timeit pd.to_datetime(s, infer_datetime_format=False) # doctest: +SKIP\n 1 loop, best of 3: 471 ms per loop\n \n Using a unix epoch time\n \n >>> pd.to_datetime(1490195805, unit='s')\n Timestamp('2017-03-22 15:16:45')\n >>> pd.to_datetime(1490195805433502912, unit='ns')\n Timestamp('2017-03-22 15:16:45.433502912')\n \n .. warning:: For float arg, precision rounding might happen. To prevent\n unexpected behavior use a fixed-width exact type.\n \n Using a non-unix epoch origin\n \n >>> pd.to_datetime([1, 2, 3], unit='D',\n ... origin=pd.Timestamp('1960-01-01'))\n DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)\n\n"
],
[
"combined_df.BIRTHDATE_dt.min()",
"_____no_output_____"
],
[
"combined_df.BIRTHDATE_dt.max()",
"_____no_output_____"
],
[
"combined_df[combined_df.BIRTHDATE_dt == '1965']",
"_____no_output_____"
],
[
"combined_df[(combined_df['ADDR-ZIP-CODE'] == 78230) & (combined_df['BIRTHDATE_dt'] > '1980')]",
"_____no_output_____"
],
[
"combined_df.columns",
"_____no_output_____"
],
[
"combined_df[combined_df['ADDR-STREET'].str.contains('TERRACE PLACE')]",
"_____no_output_____"
]
],
[
[
"https://search.bexar.org/Case/CaseDetail?r=5ae98979-5190-4b60-b939-cb5ddef960cd&cs=2007CR11191W&ct=&&p=1_2007CR11191W+++D1871272998100000",
"_____no_output_____"
]
],
[
[
"combined_df.loc[:, 'OFFENSE-DATE_dt'] = pd.to_datetime(combined_df['OFFENSE-DATE'], errors='coerce')",
"_____no_output_____"
]
],
[
[
"https://www.krem.com/article/news/crime/police-say-guns-drugs-pipe-bombs-found-in-north-san-antonio-home-2-arrested/273-073a5835-3afc-4a01-9742-234a84bfed85",
"_____no_output_____"
]
],
[
[
"combined_df[combined_df['FULL-NAME'].str.contains('HOTTLE')]",
"_____no_output_____"
],
[
"combined_df[(combined_df['OFFENSE-DATE_dt'] =='2019-07-17')]",
"_____no_output_____"
],
[
"combined_df[combined_df['FULL-NAME'].str.contains('FERRILL')]",
"_____no_output_____"
],
[
"combined_df[(combined_df['BIRTHDATE_dt'] == '1997-09-29')]",
"_____no_output_____"
],
[
"combined_df[combined_df['FULL-NAME'].str.contains('LEDBETTER')]",
"_____no_output_____"
],
[
"combined_df[(combined_df['OFFENSE-DATE_dt'] =='2008-07-26')]",
"_____no_output_____"
]
],
[
[
"https://search.bexar.org/Case/CaseDetail?r=103d31d3-30d0-4a6d-bb7e-3a67205e3f52&cs=2009CR2391A&ct=&=&full=y&p=1_2009CR2391A%20%20%20%20D4371336448100000#events",
"_____no_output_____"
]
],
[
[
"x = combined_df[(combined_df['BIRTHDATE_dt'] == '1985-10-25') & (combined_df['FULL-NAME'].str.contains('ROBLES'))]",
"_____no_output_____"
],
[
"x.sort_values(by=['OFFENSE-DATE_dt'])",
"_____no_output_____"
],
[
"combined_df['OFFENSE-DATE_dt'].max()",
"_____no_output_____"
],
[
"len(combined_df)",
"_____no_output_____"
],
[
"combined_df[combined_df['OFFENSE-DATE_dt'] > '2021-8-1']",
"_____no_output_____"
],
[
"combined_df[combined_df['FULL-NAME'].str.contains('NICEFORO')]",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
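The exploratory cells above all follow the same pattern: coerce raw text columns to datetimes, then filter by name substrings or date comparisons. Below is a minimal hedged sketch of that pattern on a small synthetic frame — the real `combined_df` is not available here, so the values are invented and only the column names and query styles mirror the cells above.

```python
# Hedged sketch on synthetic data; only the column names and query styles
# mirror the notebook above -- the records themselves are invented.
import pandas as pd

df = pd.DataFrame({
    "FULL-NAME": ["DOE, JOHN", "HOTTLE, JANE", "SMITH, BOB"],
    "BIRTHDATE": ["1985-10-25", "1997-09-29", "not a date"],
    "OFFENSE-DATE": ["2019-07-17", "2008-07-26", "2021-08-01"],
})

# errors='coerce' turns unparseable values into NaT instead of raising
df["BIRTHDATE_dt"] = pd.to_datetime(df["BIRTHDATE"], errors="coerce")
df["OFFENSE-DATE_dt"] = pd.to_datetime(df["OFFENSE-DATE"], errors="coerce")

# the same query styles: substring match on names, equality/range filters on dates
print(df[df["FULL-NAME"].str.contains("HOTTLE")])
print(df[df["OFFENSE-DATE_dt"] == "2019-07-17"])
print(df[df["BIRTHDATE_dt"] > "1990"])
```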
d0c1dbec36a8f99e608aecc3bf093437caee7a82 | 27,749 | ipynb | Jupyter Notebook | lessons/misc/sql-intro/r-sql.ipynb | UAAppComp/studyGroup | 642eb769cb2abdce5de3f2f10dd12164ac0dd052 | [
"Apache-2.0"
] | 105 | 2015-06-22T15:23:19.000Z | 2022-03-30T12:20:09.000Z | lessons/misc/sql-intro/r-sql.ipynb | UAAppComp/studyGroup | 642eb769cb2abdce5de3f2f10dd12164ac0dd052 | [
"Apache-2.0"
] | 314 | 2015-06-18T22:10:34.000Z | 2022-02-09T16:47:52.000Z | lessons/misc/sql-intro/r-sql.ipynb | UAAppComp/studyGroup | 642eb769cb2abdce5de3f2f10dd12164ac0dd052 | [
"Apache-2.0"
] | 142 | 2015-06-18T22:11:53.000Z | 2022-02-03T16:14:43.000Z | 44.256778 | 359 | 0.361995 | [
[
[
"# Introduction to Databases and SQL\n\n- **Authors**: Ian Dennis Miller\n- **Research field**: Social Psychology - Social Complexity\n- **Lesson topic**: Databases and SQL\n- **Lesson content URL**: <https://github.com/UofTCoders/studyGroup/tree/gh-pages/lessons/misc/sql-intro>",
"_____no_output_____"
],
[
"# Plan\n\n- get a data set from R: mtcars\n- select from it using SQL (via sqldf)\n- export it as CSV file\n- in sqlite, create database and import CSV\n- use RSQLite connection to select direct from database",
"_____no_output_____"
],
[
"# get a data set from R: mtcars",
"_____no_output_____"
],
[
"The data contain Motor Trends car testing results. There are 32 cars.",
"_____no_output_____"
]
],
[
[
"nrow(mtcars)\nncol(mtcars)",
"_____no_output_____"
],
[
"head(mtcars)",
"_____no_output_____"
],
[
"summary(mtcars)",
"_____no_output_____"
]
],
[
[
"# use SQL to select cars with 6-cylinder engines",
"_____no_output_____"
]
],
[
[
"library(sqldf)\n# help(sqldf)",
"_____no_output_____"
],
[
"sqldf('select * from mtcars where cyl=6')",
"_____no_output_____"
]
],
[
[
"## There are 7 cars matching the query.",
"_____no_output_____"
]
],
[
[
"nrow(sqldf('select * from mtcars where cyl=6'))",
"_____no_output_____"
]
],
[
[
"## We can obtain the same result directly with SQL... but let's not get ahead of ourselves.",
"_____no_output_____"
]
],
[
[
"sqldf('select count(*) from mtcars where cyl=6')",
"_____no_output_____"
]
],
[
[
"# write mtcars dataset to the file system",
"_____no_output_____"
]
],
[
[
"df = mtcars\ndf$name = rownames(df) # move R's rownames into their own column\nwrite.table(df, \"mtcars.csv\", quote=TRUE, row.names=FALSE, col.names=FALSE, sep=\",\")",
"_____no_output_____"
]
],
[
[
"# create sqlite database in terminal\n\n make mtcars-init\n\nThat will run the following:\n\n sqlite3 mtcars.sqlite < mtcars-init.sql\n\nThat SQL file does the following:\n\n- drop a table called results\n- create a table called results with columns for the mtcars data\n- tell sqlite to load a CSV file\n- load the CSV file into the results table",
"_____no_output_____"
],
[
"# access sqlite from R",
"_____no_output_____"
]
],
[
[
"library(RSQLite)\n\n# connect to the database\ndb = dbConnect(SQLite(), dbname=\"mtcars.sqlite\")",
"_____no_output_____"
]
],
[
[
"## list the tables that are available",
"_____no_output_____"
]
],
[
[
"dbListTables(db)",
"_____no_output_____"
]
],
[
[
"## Inspect columns in the results table",
"_____no_output_____"
]
],
[
[
"dbListFields(db, \"results\")",
"_____no_output_____"
]
],
[
[
"# use SQL to select cars with 8-cylinder engines - directly from database",
"_____no_output_____"
]
],
[
[
"dbGetQuery(conn = db, \"select * from results where cyl=8\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
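As a cross-check of the lesson above, the same two queries can be run from Python with the standard-library `sqlite3` module. This is only an analogue of the R/RSQLite workflow shown in the notebook; it assumes `mtcars.sqlite` and its `results` table already exist (i.e. `make mtcars-init` has been run).

```python
# Minimal sketch, assuming mtcars.sqlite with a `results` table already exists.
import sqlite3

conn = sqlite3.connect("mtcars.sqlite")
cur = conn.cursor()

# analogue of dbGetQuery(conn = db, "select * from results where cyl=8")
cur.execute("SELECT * FROM results WHERE cyl = ?", (8,))
for row in cur.fetchall():
    print(row)

# analogue of sqldf('select count(*) from mtcars where cyl=6')
cur.execute("SELECT COUNT(*) FROM results WHERE cyl = ?", (6,))
print(cur.fetchone()[0])

conn.close()
```

Parameterized placeholders (`?`) are used instead of pasting the value into the SQL string, which is the usual way to keep such queries safe and reusable.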
d0c1dde6c656a73aad28e1ade2382609f9a58a37 | 6,777 | ipynb | Jupyter Notebook | README.ipynb | jonasweiss/AutoDot | 327d1dd5c377b418d90fc7c58e7c6d7c3cb8909d | [
"MIT"
] | null | null | null | README.ipynb | jonasweiss/AutoDot | 327d1dd5c377b418d90fc7c58e7c6d7c3cb8909d | [
"MIT"
] | null | null | null | README.ipynb | jonasweiss/AutoDot | 327d1dd5c377b418d90fc7c58e7c6d7c3cb8909d | [
"MIT"
] | null | null | null | 50.574627 | 801 | 0.666372 | [
[
[
"# Quantum device tuning via hypersurface sampling\n**NOTE: DUE TO MULTIPROCESSING PACKAGE THE CURRENT IMPLEMENTATION ONLY WORKS ON UNIX/LINUX OPERATING SYSTEMS [TO RUN ON WINDOWS FOLLOW THIS GUIDE](Resources/Running_on_windows.ipynb)**\n\nQuantum devices used to implement spin qubits in semiconductors are challenging to tune and characterise. Often the best approaches to tuning such devices is manual tuning or a simple heuristic algorithm which is not flexible across devices. This repository contains the statistical tuning approach detailed in https://arxiv.org/abs/2001.02589 with some additional functionality, a quick animated explanation of the approach detailed in the paper is available [here](Resources/Algorithm_overview/README.ipynb). This approach is promising as it make few assumptions about the device being tuned and hence can be applied to many systems without alteration. **For instructions on how to run a simple fake environment to see how the algorithm works see [this README.md](Playground/README.ipynb)**\n\n## Dependencies\nThe required packages required to run the algorithm are:\n```\nscikit-image\nscipy\nnumpy\nmatplotlib\nGPy\nmkl\npyDOE\n```\n# Using the algorithm\nUsing the algorithm varies depending on what measurement software you use in your lab or what you want to achieve. Specifically if your lab utilises pygor then you should call a different function to initiate the tuning. If you are unable to access a lab then you can still create a virtual environment to test the algorithm in using the Playground module. Below is documentation detailing how to run the algorithm for each of these situations.\n## Without pygor\nTo use the algorithm without pygor you must create the following:\n- jump\n- measure\n- check\n- config_file\n\nBelow is an **EXAMPLE** of how jump, check, and measure **COULD** be defined for a 5 gate device with 2 investigation (in this case plunger) gates.\n\n<ins>jump:</ins>\nJump should be a function that takes an array of values and sets them to the device. It should also accept a flag that details whether the investigation gates (typically plunger gates) should be used. \n```python\ndef jump(params,inv=False):\n if inv:\n labels = [\"dac4\",\"dac6\"] #plunger gates\n else:\n labels = [\"dac3\",\"dac4\",\"dac5\",\"dac6\",\"dac7\"] #all gates\n \n assert len(params) == len(labels) #params needs to be the same length as labels\n for i in range(len(params)):\n set_value_to_dac(labels[i],params[i]) #function that takes dac key and value and sets dac to that value\n return params\n```\n<ins>measure:</ins>\nmeasure should be a function that returns the measured current on the device.\n```python\ndef measure():\n current = get_value_from_daq() #receive a single current measurement from the daq\n return current\n```\n<ins>check:</ins>\ncheck should be a function that returns the state of all relevant dac channels.\n```python\ndef check(inv=True):\n if inv:\n labels = [\"dac4\",\"dac6\"] #plunger gates\n else:\n labels = [\"dac3\",\"dac4\",\"dac5\",\"dac6\",\"dac7\"] #all gates\n dac_state = [None]*len(labels)\n for i in range(len(labels)):\n dac_state[i] = get_current_dac_state(labels[i]) #function that takes dac key and returns state that channel is in\n return dac_state\n```\n<ins>config_file:</ins>\nconfig_file should be a string that specifies the file path of a .json file containing a json object that specifies the desired settings the user wants to use for tuning. An example string would be \"config.json\". 
For information on what the config file should contain see the json config section.\n\n### How to run\nTo run tuning without pygor once the above has been defined call the following:\n```python\nfrom AutoDot.tune import tune_from_file\ntune_from_file(jump,measure,check,config_file)\n```\n## With pygor\nTo use the algorithm without pygor you must create the following:\n\n<ins>config_file:</ins>\nconfig_file should be a string that specifies the file path of a .json file containing a json object that specifies the desired settings the user wants to use for tuning. An example string would be \"config.json\". For information on what the config file should contain see the json config section. Additional fields are required to specify pygor location and setup.\n### How to run\nTo run tuning with pygor once the above has been defined call the following:\n```python\nfrom AutoDot.tune import tune_with_pygor_from_file\ntune_with_pygor_from_file(config_file)\n```\n## With playground (environment)\nTo use the algorithm using the playground you must create the following:\n\n<ins>config_file:</ins>\nconfig_file should be a string that specifies the file path of a .json file containing a json object that specifies the desired settings the user wants to use for tuning. An example string would be \"config.json\". \n\nThe config you must supply the field \"playground\" then in this field you must specify the basic shapes you want to build your environment out of. Provided is a [demo config file](mock_device_demo_config.json) and a [README](Playground/README.ipynb) detailing how it works and what a typical run looks like.\n\n### How to run\nTo run tuning with pygor once the above has been defined call the following:\n```python\nfrom AutoDot.tune import tune_with_playground_from_file\ntune_with_playground_from_file(config_file)\n```\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown"
]
] |
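To make the interface described in the README above concrete, here is a hedged, self-contained sketch in which `jump`, `measure`, and `check` act on a fake in-memory "device" (a plain dict) rather than real hardware. The final tuning call is left commented out because it needs the AutoDot package, a Unix/Linux system, and a valid config file.

```python
# Fake stand-in device: a dict of dac values instead of real hardware.
dac_state = {"dac3": 0.0, "dac4": 0.0, "dac5": 0.0, "dac6": 0.0, "dac7": 0.0}

def jump(params, inv=False):
    labels = ["dac4", "dac6"] if inv else list(dac_state)
    for label, value in zip(labels, params):
        dac_state[label] = value
    return params

def measure():
    # stand-in current measurement that decays with the applied voltages
    return 1.0 / (1.0 + sum(abs(v) for v in dac_state.values()))

def check(inv=True):
    labels = ["dac4", "dac6"] if inv else list(dac_state)
    return [dac_state[label] for label in labels]

config_file = "config.json"  # must hold the settings described in the README

# Requires the AutoDot package (Unix/Linux only, per the README):
# from AutoDot.tune import tune_from_file
# tune_from_file(jump, measure, check, config_file)
```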
d0c1e31923d7f13bc32138fc4fa4d5808d6c6a66 | 7,424 | ipynb | Jupyter Notebook | feature_selection/data_mining_hw3.ipynb | cccaaannn/machine_learning_colab | aff3ef813f4a1b48dd64d6a57428c2868f11c091 | [
"MIT"
] | null | null | null | feature_selection/data_mining_hw3.ipynb | cccaaannn/machine_learning_colab | aff3ef813f4a1b48dd64d6a57428c2868f11c091 | [
"MIT"
] | null | null | null | feature_selection/data_mining_hw3.ipynb | cccaaannn/machine_learning_colab | aff3ef813f4a1b48dd64d6a57428c2868f11c091 | [
"MIT"
] | null | null | null | 26.140845 | 264 | 0.462554 | [
[
[
"<a href=\"https://colab.research.google.com/github/cccaaannn/machine_learning_colab/blob/master/feature_selection/data_mining_hw3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Feature selection methods",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
],
[
"download-unzip data",
"_____no_output_____"
]
],
[
[
"!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00320/student.zip\n!unzip student.zip",
"_____no_output_____"
]
],
[
[
"imports",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom scipy.stats import stats",
"_____no_output_____"
]
],
[
[
"load data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"student-mat.csv\", index_col=0, delimiter = \";\")",
"_____no_output_____"
]
],
[
[
"print head",
"_____no_output_____"
]
],
[
[
"df.head(5)",
"_____no_output_____"
]
],
[
[
"Options",
"_____no_output_____"
]
],
[
[
"p_value_threshold = 0.05",
"_____no_output_____"
]
],
[
[
"Pearson ",
"_____no_output_____"
]
],
[
[
"num_cols = df.drop([\"G3\",\"G2\",\"G1\"], axis=1)._get_numeric_data().columns\npearson_corrs = []\nfor col in num_cols:\n s = stats.pearsonr(df[col], df['G3'])\n if(s[1] < p_value_threshold):\n pearson_corrs.append((col, s[0], s[1]))\n\npearson_corrs = sorted(pearson_corrs, reverse=True, key=lambda tup: abs(tup[1]))\n\nfor i in pearson_corrs:\n print(\"columns: {0:<12} correlation: {1:<22} p-values: {2:<22}\".format(i[0], i[1], i[2]))",
"_____no_output_____"
]
],
[
[
"Spearman",
"_____no_output_____"
]
],
[
[
"num_cols = df.drop([\"G3\",\"G2\",\"G1\"], axis=1)._get_numeric_data().columns\nspearmar_corrs = []\nfor col in num_cols:\n s = stats.spearmanr(df[col], df['G3'])\n if(s[1] < p_value_threshold):\n spearmar_corrs.append((col, s[0], s[1]))\n\nspearmar_corrs = sorted(spearmar_corrs, reverse=True, key=lambda tup: abs(tup[1]))\n\nfor i in spearmar_corrs:\n print(\"columns: {0:<12} correlation: {1:<22} p-values: {2:<22}\".format(i[0], i[1], i[2]))",
"_____no_output_____"
]
],
[
[
"Kendall",
"_____no_output_____"
]
],
[
[
"num_cols = df.drop([\"G3\",\"G2\",\"G1\"], axis=1)._get_numeric_data().columns\nkendall_corrs = []\nfor col in num_cols:\n s = stats.kendalltau(df[col], df['G3'])\n if(s[1] < p_value_threshold):\n kendall_corrs.append((col, s[0], s[1]))\n \nkendall_corrs = sorted(kendall_corrs, reverse=True, key=lambda tup: abs(tup[1]))\n\nfor i in kendall_corrs:\n print(\"columns: {0:<12} correlation: {1:<22} p-values: {2:<22}\".format(i[0], i[1], i[2]))",
"_____no_output_____"
]
],
[
[
"f-value",
"_____no_output_____"
]
],
[
[
"cols = df.columns\nnum_cols = df._get_numeric_data().columns\n\ncategorical_cols = list(set(cols) - set(num_cols))\n\nf_value_corrs = []\nfor categorical_col in categorical_cols:\n groups = []\n column_categories = df[categorical_col].unique()\n for column_category in column_categories:\n groups.append(df[df[categorical_col] == column_category].age)\n\n f, p = stats.f_oneway(*groups)\n if(p < p_value_threshold):\n f_value_corrs.append((categorical_col, f, p, \", \".join(list(column_categories))))\n\n\n\nf_value_corrs = sorted(f_value_corrs, reverse=True, key=lambda tup: abs(tup[1]))\n\nfor i in f_value_corrs:\n print(\"columns: {0:<12} correlation: {1:<20} p-values: {2:<22} categories: {3:<22}\".format(i[0], i[1], i[2], i[3]))\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
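The correlation cells above repeat the same loop three times; as a tie-together, here is a small self-contained helper that runs the Pearson/Spearman/Kendall screening in one pass. It uses synthetic data rather than `student-mat.csv`, so the column names (`studytime`, `absences`, `G3`) are only illustrative stand-ins.

```python
# Self-contained sketch on synthetic data; only the screening logic mirrors
# the notebook above -- the columns and values are invented.
import numpy as np
import pandas as pd
from scipy import stats

def screen_numeric_features(df, target, p_value_threshold=0.05):
    results = []
    for col in df.select_dtypes(include=[np.number]).columns:
        if col == target:
            continue
        for name, test in [("pearson", stats.pearsonr),
                           ("spearman", stats.spearmanr),
                           ("kendall", stats.kendalltau)]:
            corr, p = test(df[col], df[target])
            if p < p_value_threshold:
                results.append((col, name, corr, p))
    return sorted(results, key=lambda t: abs(t[2]), reverse=True)

rng = np.random.default_rng(0)
demo = pd.DataFrame({"studytime": rng.integers(1, 5, 200),
                     "absences": rng.integers(0, 30, 200)})
demo["G3"] = 3 * demo["studytime"] - 0.2 * demo["absences"] + rng.normal(0, 2, 200)

for row in screen_numeric_features(demo, "G3"):
    print("{:<10} {:<9} corr={:+.3f} p={:.3g}".format(*row))
```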
d0c1f1c7bfada2ab7855f6686f0056d6d70a0b54 | 53,444 | ipynb | Jupyter Notebook | code/.ipynb_checkpoints/Predict-checkpoint.ipynb | Jexers/PUBG-Finish-Placement-Prediction-Proj | d2741a67c4fc83eeb87c869dbcae456cb6f2279f | [
"MIT"
] | null | null | null | code/.ipynb_checkpoints/Predict-checkpoint.ipynb | Jexers/PUBG-Finish-Placement-Prediction-Proj | d2741a67c4fc83eeb87c869dbcae456cb6f2279f | [
"MIT"
] | null | null | null | code/.ipynb_checkpoints/Predict-checkpoint.ipynb | Jexers/PUBG-Finish-Placement-Prediction-Proj | d2741a67c4fc83eeb87c869dbcae456cb6f2279f | [
"MIT"
] | null | null | null | 40.39607 | 300 | 0.526663 | [
[
[
"%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport itertools\nimport gc\nimport os\nimport sys\n\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import mean_squared_error,mean_absolute_error\nfrom sklearn.ensemble import RandomForestRegressor\nfrom timeit import default_timer as timer\n\nimport lightgbm as lgb\n\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.callbacks import ModelCheckpoint",
"Using TensorFlow backend.\n"
],
[
"# 通过类型转换节省内存空间\ndef reduce_mem_usage(df):\n \"\"\" iterate through all the columns of a dataframe and modify the data type\n to reduce memory usage.\n \"\"\"\n start_mem = df.memory_usage().sum() / 1024**2\n \n for col in df.columns:\n col_type = df[col].dtype\n \n if col_type != object:\n c_min = df[col].min()\n c_max = df[col].max()\n if str(col_type)[:3] == 'int':\n if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:\n df[col] = df[col].astype(np.int8)\n elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:\n df[col] = df[col].astype(np.int16)\n elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:\n df[col] = df[col].astype(np.int32)\n elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:\n df[col] = df[col].astype(np.int64) \n else:\n if c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:\n df[col] = df[col].astype(np.float32)\n else:\n df[col] = df[col].astype(np.float64)\n\n end_mem = df.memory_usage().sum() / 1024**2\n print('Memory usage of dataframe is {:.2f} MB --> {:.2f} MB (Decreased by {:.1f}%)'.format(\n start_mem, end_mem, 100 * (start_mem - end_mem) / start_mem))\n return df",
"_____no_output_____"
]
],
[
[
"# 加载数据集",
"_____no_output_____"
]
],
[
[
"def state(message,start = True, time = 0):\n if(start):\n print(f'Working on {message} ... ')\n else :\n print(f'Working on {message} took ({round(time , 3)}) Sec \\n')",
"_____no_output_____"
],
[
"# Import dataset\ndf_train = pd.read_csv('../input/train_V2.csv')\ndf_test = pd.read_csv('../input/test_V2.csv')\n\n# Reduce memory use\ndf_train=reduce_mem_usage(df_train)\ndf_test=reduce_mem_usage(df_test)\n\n# Show some data\ndf_train.head()\ndf_train.describe()",
"Memory usage of dataframe is 983.90 MB --> 339.28 MB (Decreased by 65.5%)\nMemory usage of dataframe is 413.18 MB --> 140.19 MB (Decreased by 66.1%)\n"
]
],
[
[
"# 对数据进行简单清洗",
"_____no_output_____"
]
],
[
[
"# 由于百分比是按照本局的最差名次来计算的,而不是小队的数量,并且本局最差名次与小队数量存在冗余,因此删除\n# 由于最远击杀距离统计并不准确 rankPoints官方建议谨慎使用,因此删除\ndf_train = df_train.drop(['longestKill', 'numGroups', 'rankPoints'], axis=1)\ndf_test = df_test.drop(['longestKill', 'numGroups', 'rankPoints'], axis=1)\n\n# 删除缺失值\ndf_train[df_train['winPlacePerc'].isnull()]\ndf_train.drop(2744604, inplace=True)",
"_____no_output_____"
]
],
[
[
"# 特征工程",
"_____no_output_____"
]
],
[
[
"def feature_engineering(df,is_train=True):\n if is_train: \n df = df[df['maxPlace'] > 1]\n\n state('totalDistance')\n s = timer()\n df['totalDistance'] = df['rideDistance'] + df[\"walkDistance\"] + df[\"swimDistance\"]\n e = timer()\n state('totalDistance', False, e - s)\n \n state('killPlace_over_maxPlace')\n s = timer()\n df['killPlace_over_maxPlace'] = df['killPlace'] / df['maxPlace']\n e = timer() \n state('killPlace_over_maxPlace', False, e - s)\n \n state('healsandboosts')\n s = timer()\n df['healsandboosts'] = df['heals'] + df['boosts']\n e = timer() \n state('healsandboosts', False, e - s)\n \n target = 'winPlacePerc'\n features = list(df.columns)\n \n # 去掉标称属性特征\n features.remove(\"Id\")\n features.remove(\"matchId\")\n features.remove(\"groupId\")\n features.remove(\"matchDuration\")\n features.remove(\"matchType\")\n \n y = None\n if is_train: \n y = np.array(df.groupby(['matchId', 'groupId'])[target].agg('mean'), dtype=np.float64)\n # 从特征中去掉百分比排名(预测目标)\n features.remove(target)\n \n # 统计同场比赛中同组内的各个特征的平均值及其在该场比赛下的百分比\n print(\"get group mean feature\")\n agg = df.groupby(['matchId', 'groupId'])[features].agg('mean')\n agg_rank = agg.groupby(['matchId'])[features].rank(pct=True).reset_index()\n \n \n #创建一个以matchId和groupId为索引的新数据集\n if is_train: \n df_out = agg.reset_index()[['matchId', 'groupId']]\n else: \n df_out = df[['matchId', 'groupId']]\n \n # 将新特征与df_out根据matchId和groupId合并\n df_out = df_out.merge(agg.reset_index(), suffixes=[\"\", \"\"], how='left', on=['matchId', 'groupId'])\n df_out = df_out.merge(agg_rank, suffixes=[\"_mean\", \"_mean_rank\"], how='left', on=['matchId', 'groupId'])\n \n # 统计同场比赛中同组内的各个特征的中值及其在该场比赛下的百分比\n print(\"get group median feature\")\n agg = df.groupby(['matchId','groupId'])[features].agg('median')\n agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()\n \n # 将新特征与df_out根据matchId和groupId合并\n df_out = df_out.merge(agg.reset_index(), suffixes=[\"\", \"\"], how='left', on=['matchId', 'groupId'])\n df_out = df_out.merge(agg_rank, suffixes=[\"_median\", \"_median_rank\"], how='left', on=['matchId', 'groupId'])\n \n # 统计同场比赛中同组内的各个特征的最大值及其在该场比赛下的百分比\n print(\"get group max feature\")\n agg = df.groupby(['matchId','groupId'])[features].agg('max')\n agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()\n \n # 将新特征与df_out根据matchId和groupId合并\n df_out = df_out.merge(agg.reset_index(), suffixes=[\"\", \"\"], how='left', on=['matchId', 'groupId'])\n df_out = df_out.merge(agg_rank, suffixes=[\"_max\", \"_max_rank\"], how='left', on=['matchId', 'groupId'])\n \n # 统计同场比赛中同组内的各个特征的最小值及其在该场比赛下的百分比\n print(\"get group min feature\")\n agg = df.groupby(['matchId','groupId'])[features].agg('min')\n agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()\n \n # 将新特征与df_out根据matchId和groupId合并\n df_out = df_out.merge(agg.reset_index(), suffixes=[\"\", \"\"], how='left', on=['matchId', 'groupId'])\n df_out = df_out.merge(agg_rank, suffixes=[\"_min\", \"_min_rank\"], how='left', on=['matchId', 'groupId'])\n \n # 统计同场比赛中同组内的各个特征的和及其在该场比赛下的百分比\n print(\"get group max feature\")\n agg = df.groupby(['matchId','groupId'])[features].agg('sum')\n agg_rank = agg.groupby('matchId')[features].rank(pct=True).reset_index()\n \n # 将新特征与df_out根据matchId和groupId合并\n print(\"get group sum feature\")\n df_out = df_out.merge(agg.reset_index(), suffixes=[\"\", \"\"], how='left', on=['matchId', 'groupId'])\n df_out = df_out.merge(agg_rank, suffixes=[\"_sum\", \"_sum_rank\"], how='left', on=['matchId', 'groupId'])\n \n # 
统计同场比赛中每个小组的人员数量\n print(\"get group size feature\")\n agg = df.groupby(['matchId','groupId']).size().reset_index(name='group_size')\n \n # 将Group_size特征与df_out根据matchId和groupId合并\n df_out = df_out.merge(agg, how='left', on=['matchId', 'groupId'])\n \n # 统计同场比赛下的特征平均值\n print(\"get match mean feature\")\n agg = df.groupby(['matchId'])[features].agg('mean').reset_index()\n # 将新特征与df_out根据matchId合并\n df_out = df_out.merge(agg, suffixes=[\"\", \"_match_mean\"], how='left', on=['matchId'])\n \n # 统计同场比赛中小组数量\n print(\"get match size feature\")\n agg = df.groupby(['matchId']).size().reset_index(name='match_size')\n # 将新特征与df_out根据matchId合并\n df_out = df_out.merge(agg, how='left', on=['matchId'])\n \n # 删除matchId和groupId\n df_out.drop([\"matchId\", \"groupId\"], axis=1, inplace=True)\n df_out = reduce_mem_usage(df_out)\n \n X = np.array(df_out, dtype=np.float64)\n \n del df, df_out, agg, agg_rank\n gc.collect()\n\n return X, y\n",
"_____no_output_____"
],
[
"x_train, y_train = feature_engineering(df_train,True)\nx_test, _ = feature_engineering(df_test,False)",
"Working on totalDistance ... \nWorking on totalDistance took (0.019) Sec \n\nWorking on killPlace_over_maxPlace ... \nWorking on killPlace_over_maxPlace took (0.032) Sec \n\nWorking on healsandboosts ... \nWorking on healsandboosts took (0.009) Sec \n\nget group mean feature\nget group median feature\nget group max feature\nget group min feature\nget group max feature\nget group sum feature\nget group size feature\nget match mean feature\nget match size feature\nMemory usage of dataframe is 3425.02 MB --> 1867.14 MB (Decreased by 45.5%)\nWorking on totalDistance ... \nWorking on totalDistance took (0.012) Sec \n\nWorking on killPlace_over_maxPlace ... \nWorking on killPlace_over_maxPlace took (0.016) Sec \n\nWorking on healsandboosts ... \nWorking on healsandboosts took (0.005) Sec \n\nget group mean feature\nget group median feature\nget group max feature\nget group min feature\nget group max feature\nget group sum feature\nget group size feature\nget match mean feature\nget match size feature\nMemory usage of dataframe is 3268.58 MB --> 1780.01 MB (Decreased by 45.5%)\n"
]
],
[
[
"# 建立模型",
"_____no_output_____"
]
],
[
[
"# 将数据集划分为训练集和验证集\nrandom_seed=1\nx_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size = 0.05, random_state=random_seed)",
"_____no_output_____"
]
],
[
[
"## Random Forest",
"_____no_output_____"
]
],
[
[
"RF = RandomForestRegressor(n_estimators=10, min_samples_leaf=3, max_features=0.5, n_jobs=-1)",
"_____no_output_____"
],
[
"%%time\nRF.fit(x_train, y_train)",
"Wall time: 8min 26s\n"
],
[
"mae_train_RF = mean_absolute_error(RF.predict(x_train), y_train)\nmae_val_RF = mean_absolute_error(RF.predict(x_val), y_val)\nprint('mae train RF: ', mae_train_RF)\nprint('mae val RF: ', mae_val_RF)",
"mae train RF: 0.015503626984996665\nmae val RF: 0.032972121527289365\n"
]
],
[
[
"## LightGBM",
"_____no_output_____"
]
],
[
[
"def run_lgb(train_X, train_y, val_X, val_y, x_test):\n params = {\"objective\" : \"regression\", \n \"metric\" : \"mae\", \n 'n_estimators':20000, \n 'early_stopping_rounds':200,\n \"num_leaves\" : 31, \n \"learning_rate\" : 0.05, \n \"bagging_fraction\" : 0.7,\n \"bagging_seed\" : 0, \n \"num_threads\" : 4,\n \"colsample_bytree\" : 0.7\n }\n \n lgtrain = lgb.Dataset(train_X, label=train_y)\n lgval = lgb.Dataset(val_X, label=val_y)\n model = lgb.train(params, lgtrain, valid_sets=[lgtrain, lgval], early_stopping_rounds=200, verbose_eval=1000)\n \n pred_test_y = model.predict(x_test, num_iteration=model.best_iteration)\n return pred_test_y, model",
"_____no_output_____"
],
[
"%%time\n# 训练模型\npred_test_lgb, model = run_lgb(x_train, y_train, x_val, y_val, x_test)",
"D:\\Anaconda3\\envs\\tf\\lib\\site-packages\\lightgbm\\engine.py:177: UserWarning: Found `n_estimators` in params. Will use it instead of argument\n _log_warning(f\"Found `{alias}` in params. Will use it instead of argument\")\nD:\\Anaconda3\\envs\\tf\\lib\\site-packages\\lightgbm\\engine.py:181: UserWarning: 'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.\n _log_warning(\"'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. \"\nD:\\Anaconda3\\envs\\tf\\lib\\site-packages\\lightgbm\\engine.py:239: UserWarning: 'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. Pass 'log_evaluation()' callback via 'callbacks' argument instead.\n _log_warning(\"'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. \"\n"
],
[
"mae_train_lgb = mean_absolute_error(model.predict(x_train, num_iteration=model.best_iteration), y_train)\nmae_val_lgb = mean_absolute_error(model.predict(x_val, num_iteration=model.best_iteration), y_val)\n\nprint('mae train lgb: ', mae_train_lgb)\nprint('mae val lgb: ', mae_val_lgb)",
"mae train lgb: 0.022016762514129016\nmae val lgb: 0.0270388973536143\n"
]
],
[
[
"## DNN",
"_____no_output_____"
]
],
[
[
"def run_DNN(x_train, y_train, x_val, y_val, x_test):\n NN_model = Sequential()\n NN_model.add(Dense(x_train.shape[1], input_dim = x_train.shape[1], activation='relu'))\n NN_model.add(Dense(136, activation='relu'))\n NN_model.add(Dense(136, activation='relu'))\n NN_model.add(Dense(136, activation='relu'))\n NN_model.add(Dense(136, activation='relu'))\n\n NN_model.add(Dense(1, activation='linear'))\n\n NN_model.compile(loss='mean_absolute_error', optimizer='adam', metrics=['mean_absolute_error'])\n NN_model.summary()\n \n checkpoint_name = 'Weights-{epoch:03d}--{val_loss:.5f}.hdf5' \n checkpoint = ModelCheckpoint(checkpoint_name, monitor='val_loss', verbose = 1, save_best_only = True, mode ='auto')\n callbacks_list = [checkpoint]\n \n NN_model.fit(x=x_train, \n y=y_train, \n batch_size=1000,\n epochs=30, \n verbose=1, \n callbacks=callbacks_list,\n validation_split=0.15, \n validation_data=None, \n shuffle=True,\n class_weight=None, \n sample_weight=None, \n initial_epoch=0,\n steps_per_epoch=None, \n validation_steps=None)\n\n pred_test_y = NN_model.predict(x_test)\n pred_test_y = pred_test_y.reshape(-1)\n return pred_test_y, NN_model",
"_____no_output_____"
],
[
"%%time\n# 训练模型\npred_test_DNN, model = run_DNN(x_train, y_train, x_val, y_val, x_test)",
"WARNING:tensorflow:From D:\\Anaconda3\\envs\\tf\\lib\\site-packages\\tensorflow_core\\python\\ops\\resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nModel: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_1 (Dense) (None, 266) 71022 \n_________________________________________________________________\ndense_2 (Dense) (None, 136) 36312 \n_________________________________________________________________\ndense_3 (Dense) (None, 136) 18632 \n_________________________________________________________________\ndense_4 (Dense) (None, 136) 18632 \n_________________________________________________________________\ndense_5 (Dense) (None, 136) 18632 \n_________________________________________________________________\ndense_6 (Dense) (None, 1) 137 \n=================================================================\nTotal params: 163,367\nTrainable params: 163,367\nNon-trainable params: 0\n_________________________________________________________________\nWARNING:tensorflow:From D:\\Anaconda3\\envs\\tf\\lib\\site-packages\\keras\\backend\\tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n\nTrain on 1636595 samples, validate on 288811 samples\nEpoch 1/30\n1636595/1636595 [==============================] - 14s 9us/step - loss: 1.4802 - mean_absolute_error: 1.4802 - val_loss: 0.1726 - val_mean_absolute_error: 0.1726\n\nEpoch 00001: val_loss improved from inf to 0.17264, saving model to Weights-001--0.17264.hdf5\nEpoch 2/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.1296 - mean_absolute_error: 0.1296 - val_loss: 0.0984 - val_mean_absolute_error: 0.0984\n\nEpoch 00002: val_loss improved from 0.17264 to 0.09842, saving model to Weights-002--0.09842.hdf5\nEpoch 3/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0861 - mean_absolute_error: 0.0861 - val_loss: 0.0752 - val_mean_absolute_error: 0.0752\n\nEpoch 00003: val_loss improved from 0.09842 to 0.07521, saving model to Weights-003--0.07521.hdf5\nEpoch 4/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0749 - mean_absolute_error: 0.0749 - val_loss: 0.0657 - val_mean_absolute_error: 0.0657\n\nEpoch 00004: val_loss improved from 0.07521 to 0.06569, saving model to Weights-004--0.06569.hdf5\nEpoch 5/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0664 - mean_absolute_error: 0.0664 - val_loss: 0.0615 - val_mean_absolute_error: 0.0615\n\nEpoch 00005: val_loss improved from 0.06569 to 0.06152, saving model to Weights-005--0.06152.hdf5\nEpoch 6/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0618 - mean_absolute_error: 0.0618 - val_loss: 0.0573 - val_mean_absolute_error: 0.0573\n\nEpoch 00006: val_loss improved from 0.06152 to 0.05730, saving model to Weights-006--0.05730.hdf5\nEpoch 7/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0575 - mean_absolute_error: 0.0575 - val_loss: 0.0562 - val_mean_absolute_error: 0.0562\n\nEpoch 00007: val_loss improved from 0.05730 to 0.05616, saving model to Weights-007--0.05616.hdf5\nEpoch 8/30\n1636595/1636595 
[==============================] - 14s 8us/step - loss: 0.0548 - mean_absolute_error: 0.0548 - val_loss: 0.0519 - val_mean_absolute_error: 0.0519\n\nEpoch 00008: val_loss improved from 0.05616 to 0.05193, saving model to Weights-008--0.05193.hdf5\nEpoch 9/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0528 - mean_absolute_error: 0.0528 - val_loss: 0.0506 - val_mean_absolute_error: 0.0506\n\nEpoch 00009: val_loss improved from 0.05193 to 0.05062, saving model to Weights-009--0.05062.hdf5\nEpoch 10/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0512 - mean_absolute_error: 0.0512 - val_loss: 0.0559 - val_mean_absolute_error: 0.0559\n\nEpoch 00010: val_loss did not improve from 0.05062\nEpoch 11/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0498 - mean_absolute_error: 0.0498 - val_loss: 0.0510 - val_mean_absolute_error: 0.0510\n\nEpoch 00011: val_loss did not improve from 0.05062\nEpoch 12/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0489 - mean_absolute_error: 0.0489 - val_loss: 0.0488 - val_mean_absolute_error: 0.0488\n\nEpoch 00012: val_loss improved from 0.05062 to 0.04885, saving model to Weights-012--0.04885.hdf5\nEpoch 13/30\n1636595/1636595 [==============================] - 14s 8us/step - loss: 0.0478 - mean_absolute_error: 0.0478 - val_loss: 0.0451 - val_mean_absolute_error: 0.0451\n\nEpoch 00013: val_loss improved from 0.04885 to 0.04513, saving model to Weights-013--0.04513.hdf5\nEpoch 14/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0470 - mean_absolute_error: 0.0470 - val_loss: 0.0457 - val_mean_absolute_error: 0.0457\n\nEpoch 00014: val_loss did not improve from 0.04513\nEpoch 15/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.0464 - mean_absolute_error: 0.0464 - val_loss: 0.0466 - val_mean_absolute_error: 0.0466\n\nEpoch 00015: val_loss did not improve from 0.04513\nEpoch 16/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0459 - mean_absolute_error: 0.0459 - val_loss: 0.0481 - val_mean_absolute_error: 0.0481\n\nEpoch 00016: val_loss did not improve from 0.04513\nEpoch 17/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.0457 - mean_absolute_error: 0.0457 - val_loss: 0.0467 - val_mean_absolute_error: 0.0467\n\nEpoch 00017: val_loss did not improve from 0.04513\nEpoch 18/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.0450 - mean_absolute_error: 0.0450 - val_loss: 0.0451 - val_mean_absolute_error: 0.0451\n\nEpoch 00018: val_loss improved from 0.04513 to 0.04509, saving model to Weights-018--0.04509.hdf5\nEpoch 19/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0446 - mean_absolute_error: 0.0446 - val_loss: 0.0434 - val_mean_absolute_error: 0.0434\n\nEpoch 00019: val_loss improved from 0.04509 to 0.04345, saving model to Weights-019--0.04345.hdf5\nEpoch 20/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0443 - mean_absolute_error: 0.0443 - val_loss: 0.0421 - val_mean_absolute_error: 0.0421\n\nEpoch 00020: val_loss improved from 0.04345 to 0.04206, saving model to Weights-020--0.04206.hdf5\nEpoch 21/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.0440 - mean_absolute_error: 0.0440 - val_loss: 0.0446 - val_mean_absolute_error: 0.0446\n\nEpoch 00021: val_loss did not improve from 0.04206\nEpoch 22/30\n1636595/1636595 
[==============================] - 13s 8us/step - loss: 0.0438 - mean_absolute_error: 0.0438 - val_loss: 0.0450 - val_mean_absolute_error: 0.0450\n\nEpoch 00022: val_loss did not improve from 0.04206\nEpoch 23/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.0435 - mean_absolute_error: 0.0435 - val_loss: 0.0425 - val_mean_absolute_error: 0.0425\n\nEpoch 00023: val_loss did not improve from 0.04206\nEpoch 24/30\n1636595/1636595 [==============================] - 12s 8us/step - loss: 0.0434 - mean_absolute_error: 0.0434 - val_loss: 0.0415 - val_mean_absolute_error: 0.0415\n\nEpoch 00024: val_loss improved from 0.04206 to 0.04153, saving model to Weights-024--0.04153.hdf5\nEpoch 25/30\n1636595/1636595 [==============================] - 14s 8us/step - loss: 0.0431 - mean_absolute_error: 0.0431 - val_loss: 0.0481 - val_mean_absolute_error: 0.0481\n\nEpoch 00025: val_loss did not improve from 0.04153\nEpoch 26/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0431 - mean_absolute_error: 0.0431 - val_loss: 0.0416 - val_mean_absolute_error: 0.0416\n\nEpoch 00026: val_loss did not improve from 0.04153\nEpoch 27/30\n1636595/1636595 [==============================] - 14s 8us/step - loss: 0.0427 - mean_absolute_error: 0.0427 - val_loss: 0.0446 - val_mean_absolute_error: 0.0446\n\nEpoch 00027: val_loss did not improve from 0.04153\nEpoch 28/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0426 - mean_absolute_error: 0.0426 - val_loss: 0.0428 - val_mean_absolute_error: 0.0428\n\nEpoch 00028: val_loss did not improve from 0.04153\nEpoch 29/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0425 - mean_absolute_error: 0.0425 - val_loss: 0.0456 - val_mean_absolute_error: 0.0456\n\nEpoch 00029: val_loss did not improve from 0.04153\nEpoch 30/30\n1636595/1636595 [==============================] - 13s 8us/step - loss: 0.0423 - mean_absolute_error: 0.0423 - val_loss: 0.0410 - val_mean_absolute_error: 0.0410\n\nEpoch 00030: val_loss improved from 0.04153 to 0.04104, saving model to Weights-030--0.04104.hdf5\nWall time: 7min 37s\n"
],
[
"mae_train_DNN = mean_absolute_error(model.predict(x_train), y_train)\nmae_val_DNN = mean_absolute_error(model.predict(x_val), y_val)\nprint('mae train dnn: ', mae_train_DNN)\nprint('mae val dnn: ', mae_val_DNN)",
"mae train dnn: 0.04090054122667123\nmae val dnn: 0.040935060316103625\n"
]
],
[
[
"# 使用训练好的模型进行预测",
"_____no_output_____"
],
[
"## Random Forest",
"_____no_output_____"
]
],
[
[
"pred_test_RF = RF.predict(x_test)\ndf_test['winPlacePerc_RF'] = pred_test_RF\nsubmission = df_test[['Id', 'winPlacePerc_RF']]\nsubmission.to_csv('../output/submission_RF.csv', index=False)",
"_____no_output_____"
]
],
[
[
"## LightGBM",
"_____no_output_____"
]
],
[
[
"df_test['winPlacePerc_lgb'] = pred_test_lgb\nsubmission = df_test[['Id', 'winPlacePerc_lgb']]\nsubmission.to_csv('../output/submission_lgb.csv', index=False)",
"_____no_output_____"
]
],
[
[
"## DNN",
"_____no_output_____"
]
],
[
[
"df_test['winPlacePerc_DNN'] = pred_test_DNN\nsubmission = df_test[['Id', 'winPlacePerc_DNN']]\nsubmission.to_csv('../output/submission_DNN.csv', index=False)",
"_____no_output_____"
]
],
[
[
"## 根据验证集上的MAE值为模型划分权重进行集成(RF + DNN + LightGBM)",
"_____no_output_____"
]
],
[
[
"weight_DNN = (1 - mae_val_DNN) / (3 - mae_val_DNN - mae_val_RF - mae_val_lgb)\nweight_RF = (1 - mae_val_RF) / (3 - mae_val_DNN - mae_val_RF - mae_val_lgb)\nweight_lgb = (1 - mae_val_lgb) / (3 - mae_val_DNN - mae_val_RF - mae_val_lgb)\n\ndf_test['winPlacePerc'] = df_test.apply(lambda x: x['winPlacePerc_RF'] * weight_RF + x['winPlacePerc_DNN'] * weight_DNN + x['winPlacePerc_lgb'] * weight_lgb, axis=1)\nsubmission = df_test[['Id', 'winPlacePerc']]\nsubmission.to_csv('../output/submission.csv', index=False)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
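For reference, the MAE-based weighting used in the final ensembling cell above can be factored into a small standalone function. The MAE numbers below are just the approximate validation values printed earlier in that notebook, and the toy predictions are invented to show the blending step.

```python
# Sketch of the ensembling rule above: weight_i = (1 - mae_i) / sum_j (1 - mae_j),
# which equals (1 - mae_i) / (3 - sum of the three MAEs) for three models.
import numpy as np

def mae_weights(maes):
    """Weight each model by (1 - MAE), normalised over all models."""
    w = 1.0 - np.asarray(maes, dtype=float)
    return w / w.sum()

mae_val = {"RF": 0.0330, "lgb": 0.0270, "DNN": 0.0409}  # approx. validation MAEs above
weights = mae_weights(list(mae_val.values()))
print(dict(zip(mae_val, weights.round(4))))

# blended prediction = sum_i weight_i * prediction_i (toy numbers)
preds = {"RF": np.array([0.50, 0.90]),
         "lgb": np.array([0.52, 0.88]),
         "DNN": np.array([0.47, 0.93])}
blend = sum(w * preds[name] for name, w in zip(mae_val, weights))
print(blend)
```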
d0c1f2d6b91ea546b11d3f96eb98605cf44cb654 | 98,517 | ipynb | Jupyter Notebook | docs/notebooks/ClassDiagram.ipynb | bjrnmath/debuggingbook | 8b6cd36fc75a89464e9252e40e1d4edcb6a70559 | [
"MIT"
] | null | null | null | docs/notebooks/ClassDiagram.ipynb | bjrnmath/debuggingbook | 8b6cd36fc75a89464e9252e40e1d4edcb6a70559 | [
"MIT"
] | null | null | null | docs/notebooks/ClassDiagram.ipynb | bjrnmath/debuggingbook | 8b6cd36fc75a89464e9252e40e1d4edcb6a70559 | [
"MIT"
] | null | null | null | 35.798328 | 224 | 0.522976 | [
[
[
"# Class Diagrams\n\nThis is a simple viewer for class diagrams. Customized towards the book.",
"_____no_output_____"
],
[
"**Prerequisites**\n\n* _Refer to earlier chapters as notebooks here, as here:_ [Earlier Chapter](Debugger.ipynb).",
"_____no_output_____"
]
],
[
[
"import bookutils",
"_____no_output_____"
]
],
[
[
"## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from debuggingbook.ClassDiagram import <identifier>\n```\n\nand then make use of the following features.\n\n\nThe function `display_class_hierarchy()` function shows the class hierarchy for the given class (or list of classes). \n* The keyword parameter `public_methods`, if given, is a list of \"public\" methods to be used by clients (default: all methods with docstrings).\n* The keyword parameter `abstract_classes`, if given, is a list of classes to be displayed as \"abstract\" (i.e. with a cursive class name).\n\n```python\n>>> display_class_hierarchy(D_Class, abstract_classes=[A_Class])\n```\n\n",
"_____no_output_____"
],
[
"## Getting a Class Hierarchy",
"_____no_output_____"
]
],
[
[
"import inspect",
"_____no_output_____"
]
],
[
[
"Using `mro()`, we can access the class hierarchy. We make sure to avoid duplicates created by `class X(X)`.",
"_____no_output_____"
]
],
[
[
"# ignore\nfrom typing import Callable, Dict, Type, Set, List, Union, Any, Tuple, Optional",
"_____no_output_____"
],
[
"def class_hierarchy(cls: Type) -> List[Type]:\n superclasses = cls.mro()\n hierarchy = []\n last_superclass_name = \"\"\n\n for superclass in superclasses:\n if superclass.__name__ != last_superclass_name:\n hierarchy.append(superclass)\n last_superclass_name = superclass.__name__\n\n return hierarchy",
"_____no_output_____"
]
],
[
[
"Here's an example:",
"_____no_output_____"
]
],
[
[
"class A_Class:\n \"\"\"A Class which does A thing right.\n Comes with a longer docstring.\"\"\"\n\n def foo(self) -> None:\n \"\"\"The Adventures of the glorious Foo\"\"\"\n pass\n\n def quux(self) -> None:\n \"\"\"A method that is not used.\"\"\"\n pass",
"_____no_output_____"
],
[
"class A_Class(A_Class):\n # We define another function in a separate cell.\n\n def second(self) -> None:\n pass",
"_____no_output_____"
],
[
"class B_Class(A_Class):\n \"\"\"A subclass inheriting some methods.\"\"\"\n\n VAR = \"A variable\"\n\n def foo(self) -> None:\n \"\"\"A WW2 foo fighter.\"\"\"\n pass\n\n def bar(self, qux: Any = None, bartender: int = 42) -> None:\n \"\"\"A qux walks into a bar.\n `bartender` is an optional attribute.\"\"\"\n pass",
"_____no_output_____"
],
[
"class C_Class:\n \"\"\"A class injecting some method\"\"\"\n\n def qux(self) -> None:\n pass",
"_____no_output_____"
],
[
"class D_Class(B_Class, C_Class):\n \"\"\"A subclass inheriting from multiple superclasses.\n Comes with a fairly long, but meaningless documentation.\"\"\"\n\n def foo(self) -> None:\n B_Class.foo(self)",
"_____no_output_____"
],
[
"class D_Class(D_Class):\n pass # An incremental addiiton that should not impact D's semantics",
"_____no_output_____"
],
[
"class_hierarchy(D_Class)",
"_____no_output_____"
]
],
[
[
"## Getting a Class Tree",
"_____no_output_____"
],
[
"We can use `__bases__` to obtain the immediate base classes.",
"_____no_output_____"
]
],
[
[
"D_Class.__bases__",
"_____no_output_____"
]
],
[
[
"`class_tree()` returns a class tree, using the \"lowest\" (most specialized) class with the same name.",
"_____no_output_____"
]
],
[
[
"def class_tree(cls: Type, lowest: Type = None) -> List[Tuple[Type, List]]:\n ret = []\n for base in cls.__bases__:\n if base.__name__ == cls.__name__:\n if not lowest:\n lowest = cls\n ret += class_tree(base, lowest)\n else:\n if lowest:\n cls = lowest\n ret.append((cls, class_tree(base)))\n\n return ret",
"_____no_output_____"
],
[
"class_tree(D_Class)",
"_____no_output_____"
],
[
"class_tree(D_Class)[0][0]",
"_____no_output_____"
],
[
"assert class_tree(D_Class)[0][0] == D_Class",
"_____no_output_____"
]
],
[
[
"`class_set()` flattens the tree into a set:",
"_____no_output_____"
]
],
[
[
"def class_set(classes: Union[Type, List[Type]]) -> Set[Type]:\n if not isinstance(classes, list):\n classes = [classes]\n\n ret = set()\n\n def traverse_tree(tree: List[Tuple[Type, List]]) -> None:\n for (cls, subtrees) in tree:\n ret.add(cls)\n for subtree in subtrees:\n traverse_tree(subtrees)\n\n for cls in classes:\n traverse_tree(class_tree(cls))\n\n return ret",
"_____no_output_____"
],
[
"class_set(D_Class)",
"_____no_output_____"
],
[
"assert A_Class in class_set(D_Class)",
"_____no_output_____"
],
[
"assert B_Class in class_set(D_Class)",
"_____no_output_____"
],
[
"assert C_Class in class_set(D_Class)",
"_____no_output_____"
],
[
"assert D_Class in class_set(D_Class)",
"_____no_output_____"
],
[
"class_set([B_Class, C_Class])",
"_____no_output_____"
]
],
[
[
"### Getting Docs",
"_____no_output_____"
]
],
[
[
"A_Class.__doc__",
"_____no_output_____"
],
[
"A_Class.__bases__[0].__doc__",
"_____no_output_____"
],
[
"A_Class.__bases__[0].__name__",
"_____no_output_____"
],
[
"D_Class.foo",
"_____no_output_____"
],
[
"D_Class.foo.__doc__",
"_____no_output_____"
],
[
"A_Class.foo.__doc__",
"_____no_output_____"
],
[
"def docstring(obj: Any) -> str:\n doc = inspect.getdoc(obj)\n return doc if doc else \"\"",
"_____no_output_____"
],
[
"docstring(A_Class)",
"_____no_output_____"
],
[
"docstring(D_Class.foo)",
"_____no_output_____"
],
[
"def unknown() -> None:\n pass",
"_____no_output_____"
],
[
"docstring(unknown)",
"_____no_output_____"
],
[
"import html",
"_____no_output_____"
],
[
"import re",
"_____no_output_____"
],
[
"def escape(text: str) -> str:\n text = html.escape(text)\n assert '<' not in text\n assert '>' not in text\n text = text.replace('{', '{')\n text = text.replace('|', '|')\n text = text.replace('}', '}')\n return text",
"_____no_output_____"
],
[
"escape(\"f(foo={})\")",
"_____no_output_____"
],
[
"def escape_doc(docstring: str) -> str:\n DOC_INDENT = 0\n docstring = \"
\".join(\n ' ' * DOC_INDENT + escape(line).strip()\n for line in docstring.split('\\n')\n )\n return docstring",
"_____no_output_____"
],
[
"print(escape_doc(\"'Hello\\n {You|Me}'\"))",
"'Hello
{You|Me}'\n"
]
],
[
[
"## Getting Methods and Variables",
"_____no_output_____"
]
],
[
[
"inspect.getmembers(D_Class)",
"_____no_output_____"
],
[
"def class_items(cls: Type, pred: Callable) -> List[Tuple[str, Any]]:\n def _class_items(cls: Type) -> List:\n all_items = inspect.getmembers(cls, pred)\n for base in cls.__bases__:\n all_items += _class_items(base)\n\n return all_items\n\n unique_items = []\n items_seen = set()\n for (name, item) in _class_items(cls):\n if name not in items_seen:\n unique_items.append((name, item))\n items_seen.add(name)\n\n return unique_items",
"_____no_output_____"
],
[
"def class_methods(cls: Type) -> List[Tuple[str, Callable]]:\n return class_items(cls, inspect.isfunction)",
"_____no_output_____"
],
[
"def defined_in(name: str, cls: Type) -> bool:\n if not hasattr(cls, name):\n return False\n\n defining_classes = []\n\n def search_superclasses(name: str, cls: Type) -> None:\n if not hasattr(cls, name):\n return\n\n for base in cls.__bases__:\n if hasattr(base, name):\n defining_classes.append(base)\n search_superclasses(name, base)\n\n search_superclasses(name, cls)\n\n if any(cls.__name__ != c.__name__ for c in defining_classes):\n return False # Already defined in superclass\n\n return True",
"_____no_output_____"
],
[
"assert not defined_in('VAR', A_Class)",
"_____no_output_____"
],
[
"assert defined_in('VAR', B_Class)",
"_____no_output_____"
],
[
"assert not defined_in('VAR', C_Class)",
"_____no_output_____"
],
[
"assert not defined_in('VAR', D_Class)",
"_____no_output_____"
],
[
"def class_vars(cls: Type) -> List[Any]:\n def is_var(item: Any) -> bool:\n return not callable(item)\n\n return [item for item in class_items(cls, is_var) \n if not item[0].startswith('__') and defined_in(item[0], cls)]",
"_____no_output_____"
],
[
"class_methods(D_Class)",
"_____no_output_____"
],
[
"class_vars(B_Class)",
"_____no_output_____"
]
],
[
[
"We're only interested in \n\n* functions _defined_ in that class\n* functions that come with a docstring",
"_____no_output_____"
]
],
[
[
"def public_class_methods(cls: Type) -> List[Tuple[str, Callable]]:\n return [(name, method) for (name, method) in class_methods(cls) \n if method.__qualname__.startswith(cls.__name__)]",
"_____no_output_____"
],
[
"def doc_class_methods(cls: Type) -> List[Tuple[str, Callable]]:\n return [(name, method) for (name, method) in public_class_methods(cls) \n if docstring(method) is not None]",
"_____no_output_____"
],
[
"public_class_methods(D_Class)",
"_____no_output_____"
],
[
"doc_class_methods(D_Class)",
"_____no_output_____"
],
[
"def overloaded_class_methods(classes: Union[Type, List[Type]]) -> Set[str]:\n all_methods: Dict[str, Set[Callable]] = {}\n for cls in class_set(classes):\n for (name, method) in class_methods(cls):\n if method.__qualname__.startswith(cls.__name__):\n all_methods.setdefault(name, set())\n all_methods[name].add(cls)\n\n return set(name for name in all_methods if len(all_methods[name]) >= 2)",
"_____no_output_____"
],
[
"overloaded_class_methods(D_Class)",
"_____no_output_____"
]
],
[
[
"## Drawing Class Hierarchy with Method Names",
"_____no_output_____"
]
],
[
[
"from inspect import signature",
"_____no_output_____"
],
[
"import warnings",
"_____no_output_____"
],
[
"def display_class_hierarchy(classes: Union[Type, List[Type]], \n public_methods: Optional[List] = None,\n abstract_classes: Optional[List] = None,\n include_methods: bool = True,\n include_class_vars: bool =True,\n include_legend: bool = True,\n project: str = 'fuzzingbook',\n log: bool = False) -> Any:\n \"\"\"Visualize a class hierarchy.\n`classes` is a Python class (or a list of classes) to be visualized.\n`public_methods`, if given, is a list of methods to be shown as \"public\" (bold).\n (Default: all methods with a docstring)\n`abstract_classes`, if given, is a list of classes to be shown as \"abstract\" (cursive).\n (Default: all classes with an abstract method)\n`include_methods`: if True, include all methods (default)\n`include_legend`: if True, include a legend (default)\n \"\"\"\n from graphviz import Digraph\n\n if project == 'debuggingbook':\n CLASS_FONT = 'Raleway, Helvetica, Arial, sans-serif'\n CLASS_COLOR = '#6A0DAD' # HTML 'purple'\n else:\n CLASS_FONT = 'Patua One, Helvetica, sans-serif'\n CLASS_COLOR = '#B03A2E'\n\n METHOD_FONT = \"'Fira Mono', 'Source Code Pro', 'Courier', monospace\"\n METHOD_COLOR = 'black'\n\n if isinstance(classes, list):\n starting_class = classes[0]\n else:\n starting_class = classes\n classes = [starting_class]\n\n title = starting_class.__name__ + \" class hierarchy\"\n\n dot = Digraph(comment=title)\n dot.attr('node', shape='record', fontname=CLASS_FONT)\n dot.attr('graph', rankdir='BT', tooltip=title)\n dot.attr('edge', arrowhead='empty')\n edges = set()\n overloaded_methods: Set[str] = set()\n\n drawn_classes = set()\n\n def method_string(method_name: str, public: bool, overloaded: bool,\n fontsize: float = 10.0) -> str:\n method_string = f'<font face=\"{METHOD_FONT}\" point-size=\"{str(fontsize)}\">'\n\n if overloaded:\n name = f'<i>{method_name}()</i>'\n else:\n name = f'{method_name}()'\n\n if public:\n method_string += f'<b>{name}</b>'\n else:\n method_string += f'<font color=\"{METHOD_COLOR}\">' \\\n f'{name}</font>'\n\n method_string += '</font>'\n return method_string\n\n def var_string(var_name: str, fontsize: int = 10) -> str:\n var_string = f'<font face=\"{METHOD_FONT}\" point-size=\"{str(fontsize)}\">'\n var_string += f'{var_name}'\n var_string += '</font>'\n return var_string\n\n def is_overloaded(method_name: str, f: Any) -> bool:\n return (method_name in overloaded_methods or\n (docstring(f) is not None and \"in subclasses\" in docstring(f)))\n\n def is_abstract(cls: Type) -> bool:\n if not abstract_classes:\n return inspect.isabstract(cls)\n\n return (cls in abstract_classes or\n any(c.__name__ == cls.__name__ for c in abstract_classes))\n\n def is_public(method_name: str, f: Any) -> bool:\n if public_methods:\n return (method_name in public_methods or\n f in public_methods or\n any(f.__qualname__ == m.__qualname__\n for m in public_methods))\n\n return bool(docstring(f))\n\n def class_vars_string(cls: Type, url: str) -> str:\n cls_vars = class_vars(cls)\n if len(cls_vars) == 0:\n return \"\"\n\n vars_string = f'<table border=\"0\" cellpadding=\"0\" ' \\\n f'cellspacing=\"0\" ' \\\n f'align=\"left\" tooltip=\"{cls.__name__}\" href=\"#\">'\n\n for (name, var) in cls_vars:\n if log:\n print(f\" Drawing {name}\")\n\n var_doc = escape(f\"{name} = {repr(var)}\")\n tooltip = f' tooltip=\"{var_doc}\"'\n href = f' href=\"{url}\"'\n vars_string += f'<tr><td align=\"left\" border=\"0\"' \\\n f'{tooltip}{href}>'\n\n vars_string += var_string(name)\n vars_string += '</td></tr>'\n\n vars_string += '</table>'\n return vars_string\n\n def 
class_methods_string(cls: Type, url: str) -> str:\n methods = public_class_methods(cls)\n # return \"<br/>\".join([name + \"()\" for (name, f) in methods])\n if len(methods) == 0:\n return \"\"\n\n methods_string = f'<table border=\"0\" cellpadding=\"0\" ' \\\n f'cellspacing=\"0\" ' \\\n f'align=\"left\" tooltip=\"{cls.__name__}\" href=\"#\">'\n\n for public in [True, False]:\n for (name, f) in methods:\n if public != is_public(name, f):\n continue\n\n if log:\n print(f\" Drawing {name}()\")\n\n if is_public(name, f) and not docstring(f):\n warnings.warn(f\"{f.__qualname__}() is listed as public,\"\n f\" but has no docstring\")\n\n overloaded = is_overloaded(name, f)\n\n method_doc = escape(name + str(inspect.signature(f)))\n if docstring(f):\n method_doc += \":
\" + escape_doc(docstring(f))\n\n # Tooltips are only shown if a href is present, too\n tooltip = f' tooltip=\"{method_doc}\"'\n href = f' href=\"{url}\"'\n methods_string += f'<tr><td align=\"left\" border=\"0\"' \\\n f'{tooltip}{href}>'\n\n methods_string += method_string(name, public, overloaded)\n\n methods_string += '</td></tr>'\n\n methods_string += '</table>'\n return methods_string\n\n def display_class_node(cls: Type) -> None:\n name = cls.__name__\n\n if name in drawn_classes:\n return\n drawn_classes.add(name)\n\n if log:\n print(f\"Drawing class {name}\")\n\n if cls.__module__ == '__main__':\n url = '#'\n else:\n url = cls.__module__ + '.ipynb'\n\n if is_abstract(cls):\n formatted_class_name = f'<i>{cls.__name__}</i>'\n else:\n formatted_class_name = cls.__name__\n\n if include_methods or include_class_vars:\n vars = class_vars_string(cls, url)\n methods = class_methods_string(cls, url)\n spec = '<{<b><font color=\"' + CLASS_COLOR + '\">' + \\\n formatted_class_name + '</font></b>'\n if include_class_vars and vars:\n spec += '|' + vars\n if include_methods and methods:\n spec += '|' + methods\n spec += '}>'\n else:\n spec = '<' + formatted_class_name + '>'\n\n class_doc = escape('class ' + cls.__name__)\n if docstring(cls):\n class_doc += ':
' + escape_doc(docstring(cls))\n else:\n warnings.warn(f\"Class {cls.__name__} has no docstring\")\n\n dot.node(name, spec, tooltip=class_doc, href=url)\n\n def display_class_trees(trees: List[Tuple[Type, List]]) -> None:\n for tree in trees:\n (cls, subtrees) = tree\n display_class_node(cls)\n\n for subtree in subtrees:\n (subcls, _) = subtree\n\n if (cls.__name__, subcls.__name__) not in edges:\n dot.edge(cls.__name__, subcls.__name__)\n edges.add((cls.__name__, subcls.__name__))\n\n display_class_trees(subtrees)\n\n def display_legend() -> None:\n fontsize = 8.0\n\n label = f'<b><font color=\"{CLASS_COLOR}\">Legend</font></b><br align=\"left\"/>' \n\n for item in [\n method_string(\"public_method\",\n public=True, overloaded=False, fontsize=fontsize),\n method_string(\"private_method\",\n public=False, overloaded=False, fontsize=fontsize),\n method_string(\"overloaded_method\",\n public=False, overloaded=True, fontsize=fontsize)\n ]:\n label += '• ' + item + '<br align=\"left\"/>'\n\n label += f'<font face=\"Helvetica\" point-size=\"{str(fontsize + 1)}\">' \\\n 'Hover over names to see doc' \\\n '</font><br align=\"left\"/>'\n\n dot.node('Legend', label=f'<{label}>', shape='plain', fontsize=str(fontsize + 2))\n\n for cls in classes:\n tree = class_tree(cls)\n overloaded_methods = overloaded_class_methods(cls)\n display_class_trees(tree)\n\n if include_legend:\n display_legend()\n\n return dot",
"_____no_output_____"
],
[
"display_class_hierarchy(D_Class, project='debuggingbook', log=True)",
"Drawing class D_Class\n Drawing foo()\nDrawing class B_Class\n Drawing VAR\n Drawing bar()\n Drawing foo()\nDrawing class A_Class\n Drawing foo()\n Drawing quux()\n Drawing second()\nDrawing class C_Class\n Drawing qux()\n"
],
[
"display_class_hierarchy(D_Class, project='fuzzingbook')",
"_____no_output_____"
]
],
[
[
"Here is a variant with abstract classes and logging:",
"_____no_output_____"
]
],
[
[
"display_class_hierarchy([A_Class, B_Class],\n abstract_classes=[A_Class],\n public_methods=[\n A_Class.quux,\n ], log=True)",
"Drawing class A_Class\n Drawing quux()\n Drawing foo()\n Drawing second()\nDrawing class B_Class\n Drawing VAR\n Drawing bar()\n Drawing foo()\n"
]
],
[
[
"## Synopsis",
"_____no_output_____"
],
[
"The function `display_class_hierarchy()` function shows the class hierarchy for the given class (or list of classes). \n* The keyword parameter `public_methods`, if given, is a list of \"public\" methods to be used by clients (default: all methods with docstrings).\n* The keyword parameter `abstract_classes`, if given, is a list of classes to be displayed as \"abstract\" (i.e. with a cursive class name).",
"_____no_output_____"
]
],
[
[
"display_class_hierarchy(D_Class, abstract_classes=[A_Class])",
"_____no_output_____"
]
],
[
[
"## Exercises",
"_____no_output_____"
],
[
"Enjoy!",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c1f4588b5920c25220ed5165105c1b13ab3a03 | 305,163 | ipynb | Jupyter Notebook | mltrain-nips-2017/ben_athiwaratkun/pytorch-bayesgan/Bayesian GAN in PyTorch.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:16:23.000Z | 2019-05-10T09:16:23.000Z | mltrain-nips-2017/ben_athiwaratkun/pytorch-bayesgan/Bayesian GAN in PyTorch.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | null | null | null | mltrain-nips-2017/ben_athiwaratkun/pytorch-bayesgan/Bayesian GAN in PyTorch.ipynb | gopala-kr/ds-notebooks | bc35430ecdd851f2ceab8f2437eec4d77cb59423 | [
"MIT"
] | 1 | 2019-05-10T09:17:28.000Z | 2019-05-10T09:17:28.000Z | 78.307159 | 30,254 | 0.560779 | [
[
[
"# Bayesian GAN",
"_____no_output_____"
],
[
"Bayesian GAN (Saatchi and Wilson, 2017) is a Bayesian formulation of Generative Adversarial Networks (Goodfellow, 2014) where we learn the **distributions** of the generator parameters $\\theta_g$ and the discriminator parameters $\\theta_d$ instead of optimizing for point estimates. The benefits of the Bayesian approach include the flexibility to model **multimodality** in the parameter space, as well as the ability to **prevent mode collapse** in the maximum likelihood (non-Bayesian) case.\n\n\nWe learn Bayesian GAN via an approximate inference algorithm called **Stochastic Gradient Hamiltonian Monte Carlo (SGHMC)** which is a gradient-based MCMC methods whose samples approximate the true posterior distributions of $\\theta_g$ and $\\theta_d$.",
"_____no_output_____"
],
[
"The Bayesian GAN training process starts from sampling noise $z$ from a fixed distribution (typically standard d-dim normal). The noise is fed to the generator where the parameters $\\theta_g$ are sampled from the posterior distribution $p(\\theta_g | D)$. The generated image given the parameters $\\theta_g$ ($G(z|\\theta_g)$) as well as the real data are presented to the discriminator, whose parameters are sample from its posterior distribution $p(\\theta_d|D)$. We update the posteriors using the gradients $\\frac{\\partial \\log p(\\theta_g|D) }{\\partial \\theta_g }$ and $\\frac{\\partial \\log p(\\theta_d|D) }{\\partial \\theta_d }$ with Stochastic Gradient Hamiltonian Monte Carlo (SGHMC). Next section explains the intuition behind SGHMC.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"<img src=\"figs/graphics_bayesgan.pdf\">",
"_____no_output_____"
],
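[
"As a minimal sketch of the data flow above, the snippet below uses tiny linear stand-ins for the generator and the discriminator and keeps several generator parameter samples in parallel, in the same spirit as the `netGs` list used by the training code later in this notebook. All sizes here (`nz`, `nx`, `num_mcmc`) are arbitrary illustration values, not the notebook's actual settings.\n\n```python\nimport torch\nimport torch.nn as nn\n\ntorch.manual_seed(0)\n\n# Arbitrary illustration sizes; the notebook itself uses DCGAN-style _netG / _netD.\nnz, nx, num_mcmc = 8, 16, 3\n\n# One generator copy per Monte Carlo sample of theta_g ~ p(theta_g | D)\ngenerators = [nn.Linear(nz, nx) for _ in range(num_mcmc)]\ndiscriminator = nn.Linear(nx, 1)   # a single theta_d sample, for simplicity\n\nz = torch.randn(32, nz)            # z ~ N(0, I)\nreal = torch.randn(32, nx)         # placeholder 'real' data\n\n# Generate under every sampled theta_g and concatenate, just as the training\n# loop later in the notebook builds `fake` from all networks in `netGs`.\nfake = torch.cat([G(z) for G in generators], dim=0)\n\nd_real = torch.sigmoid(discriminator(real))\nd_fake = torch.sigmoid(discriminator(fake))\nprint(d_real.shape, d_fake.shape)  # torch.Size([32, 1]) and torch.Size([96, 1])\n```",
"_____no_output_____"
],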
[
"# Learning Posterior Distributions \n\nThere are many approaches to estimate the posterior distribution of model parameters, namely, Markov Chain Monte Carlo (MCMC), Variational Inference (VI), Approximate Bayesian Computation (ABC), etc. Bayesian GAN uses SGHMC (Chen, 2014), a stochastic version of HMC (Neal, 2012), which is an MCMC method that (1) uses gradient to perform sampling efficiently (2) stochastic gradient from minibatch to handle large amount of data. \n\nBelow we show the visualization of samples generated from HMC. Once the algorithm runs for a while, we can see that the high density region has higher concentration of points. HMC can also handle multimodality (the second visualization).",
"_____no_output_____"
]
],
[
[
"from IPython.display import HTML\nHTML('<iframe width=\"1000\" height=\"400\" src=\"https://chi-feng.github.io/mcmc-demo/app.html#HamiltonianMC,banana\" frameborder=\"0\" allowfullscreen></iframe>')",
"_____no_output_____"
]
],
[
[
"Hamiltonian Monte Carlo allows us to learn arbitrary distributions, including multimodal distributions where other Bayesian approach such as variational inference cannot model. ",
"_____no_output_____"
]
],
[
[
"HTML('<iframe width=\"1000\" height=\"400\" src=\"https://chi-feng.github.io/mcmc-demo/app.html#HamiltonianMC,multimodal\" frameborder=\"0\" allowfullscreen></iframe>')",
"_____no_output_____"
]
],
[
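[
"Before turning to the GAN itself, here is a minimal numerical sketch of SGHMC on a toy one-dimensional double-well density with two modes. It assumes a standard SGHMC-style discretization with friction $\\alpha$ and step size $\\eta$; all constants are arbitrary illustration values.\n\n```python\nimport numpy as np\n\nnp.random.seed(0)\n\n# Toy double-well target p(theta) proportional to exp(-U(theta)), modes near -1 and +1.\ndef grad_U(theta):\n    return 4.0 * theta * (theta ** 2 - 1.0)   # gradient of U(theta) = (theta^2 - 1)^2\n\neta, alpha = 1e-3, 0.1      # step size and friction (arbitrary illustration values)\ntheta, v = 0.0, 0.0\nsamples = []\nfor t in range(100000):\n    noise = np.random.normal(0.0, np.sqrt(2.0 * alpha * eta))\n    v = (1.0 - alpha) * v - eta * grad_U(theta) + noise   # momentum step plus injected noise\n    theta = theta + v\n    if t > 10000 and t % 10 == 0:                         # discard burn-in, then thin\n        samples.append(theta)\n\nsamples = np.asarray(samples)\nprint('fraction of samples near +1:', np.mean(samples > 0.0))   # roughly 0.5 if both modes are visited\nprint('mean |theta|:', np.abs(samples).mean())                   # samples concentrate near the two modes\n```",
"_____no_output_____"
],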
[
"# Training\n\nWe show that Bayesian GAN can capture the data distribution by measuring its performance in the semi-supervised setting. We will perform the posterior update as outline in Algorithm 1 in Saatchi (2017). This algorithm can be implemented quite simply by adding noise to standard optimizers such as SGD with momentum and keep track of the parameters we sample from the posterior. ",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
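[
"As a rough sketch of what such an update can look like in code (the exact constants and preconditioning in Algorithm 1 may differ), the cell below runs a momentum-style update with added Gaussian noise on a toy two-dimensional Gaussian posterior and keeps the visited parameters as posterior samples. All values are arbitrary illustration values.\n\n```python\nimport math\nimport torch\n\ntorch.manual_seed(0)\n\n# Toy stand-in for p(theta | D): a 2-D Gaussian with an arbitrary mean mu.\nmu = torch.tensor([1.0, -2.0])\n\ndef grad_neg_log_post(theta):\n    return theta - mu          # gradient of 0.5 * ||theta - mu||^2\n\neta, alpha = 1e-2, 0.1         # step size and friction (arbitrary illustration values)\ntheta = torch.zeros(2)\nv = torch.zeros(2)\nposterior_samples = []\n\nfor step in range(20000):\n    noise = math.sqrt(2.0 * alpha * eta) * torch.randn(2)\n    v = (1.0 - alpha) * v - eta * grad_neg_log_post(theta) + noise   # momentum SGD plus noise\n    theta = theta + v\n    if step > 2000 and step % 20 == 0:    # discard burn-in, then keep samples\n        posterior_samples.append(theta.clone())\n\nsamples = torch.stack(posterior_samples)\nprint('sample mean:', samples.mean(0))    # should be close to mu\nprint('sample std :', samples.std(0))     # should be roughly 1 for small step sizes\n```",
"_____no_output_____"
],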
[
"### SGHMC by Optimizing a Noisy Loss\n\nFirst, observe that the update rules are similar to momentum SGD except for the noise $\\boldsymbol{n}$. In fact, without $\\boldsymbol{n}$, this is equivalent to performing momentum SGD with the loss is $- \\sum_{i=1}{J_g} \\sum_{k=1}^{J_d} \\log \\text{posterior} $. We will describe the case where $J_g = J_d=1$ for simplicity. \n\nWe use the main loss $\\mathcal{L} = - \\log p(\\theta | ..)$ and add a noise loss $\\mathcal{L}_\\text{noise} = \\frac{1}{\\eta} \\theta \\cdot \\boldsymbol{n}$ where $\\boldsymbol{n} \\sim \\mathcal{N}(0, 2 \\alpha \\eta I)$ so that optimizing the loss function $\\mathcal{L} + \\mathcal{L}_\\text{noise}$ with momentum SGD is equivalent to performing the SGHMC update step. \n\nBelow (Equation 3 and 4) are the posterior probabilities where each error term corresponds its negative log probability.",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
],
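[
[
"A minimal sketch of this trick for a single parameter tensor and a stand-in quadratic main loss is shown below. The `NoiseLoss` (and `PriorLoss`) modules imported from `models.bayes` later in this notebook presumably play this role for the real networks; the snippet only illustrates the formula above and is not that module's code.\n\n```python\nimport math\nimport torch\n\ntorch.manual_seed(0)\n\neta, alpha = 1e-3, 1e-4       # learning rate and noise level (arbitrary illustration values)\ntheta = torch.randn(5, requires_grad=True)     # stand-in for a network's parameters\noptimizer = torch.optim.SGD([theta], lr=eta, momentum=0.9)\n\nmain_loss = 0.5 * (theta ** 2).sum()           # stand-in for -log p(theta | ..)\n\n# Noise loss from the text: (1/eta) * theta . n  with  n ~ N(0, 2*alpha*eta*I).\n# Its gradient w.r.t. theta is n/eta, so a plain SGD step (ignoring momentum)\n# moves theta by -lr * n/eta = -n, i.e. it injects the SGHMC noise.\nn = torch.randn_like(theta) * math.sqrt(2.0 * alpha * eta)\nnoise_loss = (theta * n).sum() / eta\n\noptimizer.zero_grad()\n(main_loss + noise_loss).backward()\noptimizer.step()\nprint(theta)\n```",
"_____no_output_____"
]
],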
[
[
"!pip install tensorboard_logger",
"Collecting tensorboard_logger\n Using cached tensorboard_logger-0.0.4-py2.py3-none-any.whl\nRequirement already satisfied: six in /home/nbcommon/anaconda3_501/lib/python3.6/site-packages (from tensorboard_logger)\nRequirement already satisfied: protobuf in /home/nbcommon/anaconda3_501/lib/python3.6/site-packages (from tensorboard_logger)\nRequirement already satisfied: setuptools in /home/nbcommon/anaconda3_501/lib/python3.6/site-packages (from protobuf->tensorboard_logger)\nInstalling collected packages: tensorboard-logger\nSuccessfully installed tensorboard-logger-0.0.4\n"
],
[
"from __future__ import print_function\nimport os, pickle\nimport numpy as np\nimport random, math\nimport torch\nimport torch.nn as nn\nimport torch.nn.parallel\nimport torch.backends.cudnn as cudnn\nimport torch.optim as optim\nimport torch.utils.data\nimport torchvision.datasets as dset\nimport torchvision.transforms as transforms\nimport torchvision.utils as vutils\nfrom torch.autograd import Variable\nfrom statsutil import AverageMeter, accuracy\nfrom tensorboard_logger import configure, log_value",
"/home/nbuser/anaconda3_501/lib/python3.6/importlib/_bootstrap.py:205: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\n return f(*args, **kwds)\n"
],
[
"# Default Parameters\nimport argparse\nparser = argparse.ArgumentParser()\nparser.add_argument('--dataset', default='cifar10')\nparser.add_argument('--imageSize', type=int, default=32)\nparser.add_argument('--batchSize', type=int, default=64, help='input batch size')\nparser.add_argument('--nz', type=int, default=100, help='size of the latent z vector')\nparser.add_argument('--niter', type=int, default=2, help='number of epochs to train for')\nparser.add_argument('--lr', type=float, default=0.0002, help='learning rate, default=0.0002')\nparser.add_argument('--cuda', type=int, default=1, help='enables cuda')\nparser.add_argument('--ngpu', type=int, default=1, help='number of GPUs to use')\nparser.add_argument('--outf', default='modelfiles/pytorch_demo3', help='folder to output images and model checkpoints')\nparser.add_argument('--numz', type=int, default=1, help='The number of set of z to marginalize over.')\nparser.add_argument('--num_mcmc', type=int, default=10, help='The number of MCMC chains to run in parallel')\nparser.add_argument('--num_semi', type=int, default=4000, help='The number of semi-supervised samples')\nparser.add_argument('--gnoise_alpha', type=float, default=0.0001, help='')\nparser.add_argument('--dnoise_alpha', type=float, default=0.0001, help='')\nparser.add_argument('--d_optim', type=str, default='adam', choices=['adam', 'sgd'], help='')\nparser.add_argument('--g_optim', type=str, default='adam', choices=['adam', 'sgd'], help='')\nparser.add_argument('--stats_interval', type=int, default=10, help='Calculate test accuracy every interval')\nparser.add_argument('--tensorboard', type=int, default=1, help='')\nparser.add_argument('--bayes', type=int, default=1, help='Do Bayesian GAN or normal GAN')\nimport sys; sys.argv=['']; del sys\nopt = parser.parse_args()\ntry:\n os.makedirs(opt.outf)\nexcept OSError:\n print(\"Error Making Directory\", opt.outf)\n pass\nif opt.tensorboard: configure(opt.outf)",
"Error Making Directory modelfiles/pytorch_demo3\n"
],
[
"# First, we construct the data loader for full training set \n# as well as the data loader of a partial training set for semi-supervised learning\n# transformation operator\nnormalize = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\ntransform_opt = transforms.Compose([\n transforms.ToTensor(),\n normalize,\n ])\n# get training set and test set\ndataset = dset.CIFAR10(root=os.environ['CIFAR10_PATH'], download=True,\n transform=transform_opt) \ndataloader = torch.utils.data.DataLoader(dataset, batch_size=opt.batchSize,\n shuffle=True, num_workers=1)\n\nfrom partial_dataset import PartialDataset\n# partial dataset for semi-supervised training\ndataset_partial = PartialDataset(dataset, opt.num_semi)\n\n\n# test set for evaluation\ndataset_test = dset.CIFAR10(root=os.environ['CIFAR10_PATH'],\n train=False,\n transform=transform_opt)\ndataloader_test = torch.utils.data.DataLoader(dataset_test,\n batch_size=opt.batchSize, shuffle=False, pin_memory=True, num_workers=1)\n\ndataloader_semi = torch.utils.data.DataLoader(dataset_partial, batch_size=opt.batchSize,\n shuffle=True, num_workers=1)",
"_____no_output_____"
],
[
"# Now we initialize the distributions of G and D\n##### Generator ######\n# opt.num_mcmc is the number of MCMC chains that we run in parallel\n# opt.numz is the number of noise batches that we use. We also use different parameter samples for different batches\n# we construct opt.numz * opt.num_mcmc initial generator parameters\n# We will keep sampling parameters from the posterior starting from this set\n# Keeping track of many MCMC chains can be done quite elegantly in Pytorch\nfrom models.discriminators import _netD\nfrom models.generators import _netG\nfrom statsutil import weights_init\nnetGs = []\nfor _idxz in range(opt.numz):\n for _idxm in range(opt.num_mcmc):\n netG = _netG(opt.ngpu, nz=opt.nz)\n netG.apply(weights_init)\n netGs.append(netG)\n##### Discriminator ######\n# We will use 1 chain of MCMCs for the discriminator\n# The number of classes for semi-supervised case is 11; that is,\n# index 0 for fake data and 1-10 for the 10 classes of CIFAR.\nnum_classes = 11\nnetD = _netD(opt.ngpu, num_classes=num_classes)",
"Reusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\nReusing the Batch Norm Layers\n"
],
[
"# In order to calculate errG or errD_real, we need to sum the probabilities over all the classes (1 to K)\n# ComplementCrossEntropyLoss is a loss function that performs this task\n# We can specify a default except_index that corresponds to a fake label. In this case, we use index=0\nfrom ComplementCrossEntropyLoss import ComplementCrossEntropyLoss\ncriterion = nn.CrossEntropyLoss()\n# use the default index = 0 - equivalent to summing all other probabilities\ncriterion_comp = ComplementCrossEntropyLoss(except_index=0)\n\n\nfrom models.distributions import Normal\nfrom models.bayes import NoiseLoss, PriorLoss\n# Finally, initialize the ``optimizers''\n# Since we keep track of a set of parameters, we also need a set of\n# ``optimizers''\nif opt.d_optim == 'adam':\n optimizerD = optim.Adam(netD.parameters(), lr=opt.lr, betas=(0.5, 0.999))\nelif opt.d_optim == 'sgd':\n optimizerD = torch.optim.SGD(netD.parameters(), lr=opt.lr,\n momentum=0.9,\n nesterov=True,\n weight_decay=1e-4)\noptimizerGs = []\nfor netG in netGs:\n optimizerG = optim.Adam(netG.parameters(), lr=opt.lr, betas=(0.5, 0.999))\n optimizerGs.append(optimizerG)\n\n# since the log posterior is the average per sample, we also scale down the prior and the noise\ngprior_criterion = PriorLoss(prior_std=1., observed=1000.)\ngnoise_criterion = NoiseLoss(params=netGs[0].parameters(), scale=math.sqrt(2*opt.gnoise_alpha/opt.lr), observed=1000.)\ndprior_criterion = PriorLoss(prior_std=1., observed=50000.)\ndnoise_criterion = NoiseLoss(params=netD.parameters(), scale=math.sqrt(2*opt.dnoise_alpha*opt.lr), observed=50000.)",
"_____no_output_____"
],
[
"# Fixed noise for data generation\nfixed_noise = torch.FloatTensor(opt.batchSize, opt.nz, 1, 1).normal_(0, 1).cuda()\nfixed_noise = Variable(fixed_noise)\n\n# initialize input variables and use CUDA (optional)\ninput = torch.FloatTensor(opt.batchSize, 3, opt.imageSize, opt.imageSize)\nnoise = torch.FloatTensor(opt.batchSize, opt.nz, 1, 1)\nlabel = torch.FloatTensor(opt.batchSize)\nreal_label = 1\nfake_label = 0\n\nif opt.cuda:\n netD.cuda()\n for netG in netGs:\n netG.cuda()\n criterion.cuda()\n criterion_comp.cuda()\n input, label = input.cuda(), label.cuda()\n noise = noise.cuda()",
"_____no_output_____"
],
[
"# fully supervised\nnetD_fullsup = _netD(opt.ngpu, num_classes=num_classes)\nnetD_fullsup.apply(weights_init)\ncriterion_fullsup = nn.CrossEntropyLoss()\nif opt.d_optim == 'adam':\n optimizerD_fullsup = optim.Adam(netD_fullsup.parameters(), lr=opt.lr, betas=(0.5, 0.999))\nelse:\n optimizerD_fullsup = optim.SGD(netD_fullsup.parameters(), lr=opt.lr,\n momentum=0.9,\n nesterov=True,\n weight_decay=1e-4)\nif opt.cuda:\n netD_fullsup.cuda()\n criterion_fullsup.cuda()",
"_____no_output_____"
],
[
"# We define a class to calculate the accuracy on test set\n# to test the performance of semi-supervised training\ndef get_test_accuracy(model_d, iteration, label='semi'):\n # don't forget to do model_d.eval() before doing evaluation\n top1 = AverageMeter()\n for i, (input, target) in enumerate(dataloader_test):\n target = target.cuda()\n input = input.cuda()\n input_var = torch.autograd.Variable(input.cuda(), volatile=True)\n target_var = torch.autograd.Variable(target, volatile=True)\n output = model_d(input_var)\n\n probs = output.data[:, 1:] # discard the zeroth index\n prec1 = accuracy(probs, target, topk=(1,))[0]\n top1.update(prec1[0], input.size(0))\n if i % 50 == 0:\n print(\"{} Test: [{}/{}]\\t Prec@1 {top1.val:.3f} ({top1.avg:.3f})\"\\\n .format(label, i, len(dataloader_test), top1=top1))\n print('{label} Test Prec@1 {top1.avg:.2f}'.format(label=label, top1=top1))\n log_value('test_acc_{}'.format(label), top1.avg, iteration)",
"_____no_output_____"
],
[
"iteration = 0\nfor epoch in range(opt.niter):\n top1 = AverageMeter()\n top1_weakD = AverageMeter()\n for i, data in enumerate(dataloader):\n iteration += 1\n #######\n # 1. real input\n netD.zero_grad()\n _input, _ = data\n batch_size = _input.size(0)\n if opt.cuda:\n _input = _input.cuda()\n input.resize_as_(_input).copy_(_input) \n label.resize_(batch_size).fill_(real_label) \n inputv = Variable(input)\n labelv = Variable(label)\n \n output = netD(inputv)\n errD_real = criterion_comp(output)\n errD_real.backward()\n # calculate D_x, the probability that real data are classified \n D_x = 1 - torch.nn.functional.softmax(output).data[:, 0].mean()\n \n #######\n # 2. Generated input\n fakes = []\n for _idxz in range(opt.numz):\n noise.resize_(batch_size, opt.nz, 1, 1).normal_(0, 1)\n noisev = Variable(noise)\n for _idxm in range(opt.num_mcmc):\n idx = _idxz*opt.num_mcmc + _idxm\n netG = netGs[idx]\n _fake = netG(noisev)\n fakes.append(_fake)\n fake = torch.cat(fakes)\n output = netD(fake.detach())\n labelv = Variable(torch.LongTensor(fake.data.shape[0]).cuda().fill_(fake_label))\n errD_fake = criterion(output, labelv)\n errD_fake.backward()\n \n D_G_z1 = 1 - torch.nn.functional.softmax(output).data[:, 0].mean()\n \n #######\n # 3. Labeled Data Part (for semi-supervised learning)\n for ii, (input_sup, target_sup) in enumerate(dataloader_semi):\n input_sup, target_sup = input_sup.cuda(), target_sup.cuda()\n break\n input_sup_v = Variable(input_sup.cuda())\n # convert target indicies from 0 to 9 to 1 to 10\n target_sup_v = Variable( (target_sup + 1).cuda())\n output_sup = netD(input_sup_v)\n err_sup = criterion(output_sup, target_sup_v)\n err_sup.backward()\n prec1 = accuracy(output_sup.data, target_sup + 1, topk=(1,))[0]\n top1.update(prec1[0], input_sup.size(0))\n if opt.bayes:\n errD_prior = dprior_criterion(netD.parameters())\n errD_prior.backward()\n errD_noise = dnoise_criterion(netD.parameters())\n errD_noise.backward()\n errD = errD_real + errD_fake + err_sup + errD_prior + errD_noise\n else:\n errD = errD_real + errD_fake + err_sup\n optimizerD.step()\n \n # 4. Generator\n for netG in netGs:\n netG.zero_grad()\n labelv = Variable(torch.FloatTensor(fake.data.shape[0]).cuda().fill_(real_label))\n output = netD(fake)\n errG = criterion_comp(output)\n if opt.bayes:\n for netG in netGs:\n errG += gprior_criterion(netG.parameters())\n errG += gnoise_criterion(netG.parameters())\n errG.backward()\n D_G_z2 = 1 - torch.nn.functional.softmax(output).data[:, 0].mean()\n for optimizerG in optimizerGs:\n optimizerG.step()\n \n # 5. Fully supervised training (running in parallel for comparison)\n netD_fullsup.zero_grad()\n input_fullsup = Variable(input_sup)\n target_fullsup = Variable((target_sup + 1))\n output_fullsup = netD_fullsup(input_fullsup)\n err_fullsup = criterion_fullsup(output_fullsup, target_fullsup)\n optimizerD_fullsup.zero_grad()\n err_fullsup.backward()\n optimizerD_fullsup.step()\n \n # 6. get test accuracy after every interval\n if iteration % opt.stats_interval == 0:\n # get test accuracy on train and test\n netD.eval()\n get_test_accuracy(netD, iteration, label='semi')\n get_test_accuracy(netD_fullsup, iteration, label='sup')\n netD.train()\n \n # 7. 
Report for this iteration\n cur_val, ave_val = top1.val, top1.avg\n log_value('train_acc', top1.avg, iteration)\n print('[%d/%d][%d/%d] Loss_D: %.2f Loss_G: %.2f D(x): %.2f D(G(z)): %.2f / %.2f | Acc %.1f / %.1f'\n % (epoch, opt.niter, i, len(dataloader),\n errD.data[0], errG.data[0], D_x, D_G_z1, D_G_z2, cur_val, ave_val))\n # after each epoch, save images\n vutils.save_image(_input,\n '%s/real_samples.png' % opt.outf,\n normalize=True)\n for _zid in range(opt.numz):\n for _mid in range(opt.num_mcmc):\n idx = _zid*opt.num_mcmc + _mid\n netG = netGs[idx]\n fake = netG(fixed_noise)\n vutils.save_image(fake.data,\n '%s/fake_samples_epoch_%03d_G_z%02d_m%02d.png' % (opt.outf, epoch, _zid, _mid),\n normalize=True)\n for ii, netG in enumerate(netGs):\n torch.save(netG.state_dict(), '%s/netG%d_epoch_%d.pth' % (opt.outf, ii, epoch))\n torch.save(netD.state_dict(), '%s/netD_epoch_%d.pth' % (opt.outf, epoch))\n torch.save(netD_fullsup.state_dict(), '%s/netD_fullsup_epoch_%d.pth' % (opt.outf, epoch))",
"[0/2][0/782] Loss_D: 5.13 Loss_G: 22.97 D(x): 0.92 D(G(z)): 0.92 / 0.74 | Acc 7.8 / 7.8\n[0/2][1/782] Loss_D: 4.21 Loss_G: 23.28 D(x): 0.78 D(G(z)): 0.76 / 0.54 | Acc 0.0 / 3.9\n[0/2][2/782] Loss_D: 3.91 Loss_G: 23.61 D(x): 0.66 D(G(z)): 0.57 / 0.49 | Acc 1.6 / 3.1\n[0/2][3/782] Loss_D: 3.86 Loss_G: 23.22 D(x): 0.67 D(G(z)): 0.53 / 0.53 | Acc 3.1 / 3.1\n[0/2][4/782] Loss_D: 3.62 Loss_G: 23.17 D(x): 0.75 D(G(z)): 0.58 / 0.55 | Acc 4.7 / 3.4\n[0/2][5/782] Loss_D: 3.62 Loss_G: 23.09 D(x): 0.78 D(G(z)): 0.61 / 0.51 | Acc 6.2 / 3.9\n[0/2][6/782] Loss_D: 3.47 Loss_G: 23.12 D(x): 0.74 D(G(z)): 0.58 / 0.48 | Acc 17.2 / 5.8\n[0/2][7/782] Loss_D: 3.36 Loss_G: 23.28 D(x): 0.77 D(G(z)): 0.56 / 0.43 | Acc 14.1 / 6.8\n[0/2][8/782] Loss_D: 3.43 Loss_G: 23.49 D(x): 0.74 D(G(z)): 0.52 / 0.38 | Acc 14.1 / 7.6\nsemi Test: [0/157]\t Prec@1 34.375 (34.375)\nsemi Test: [50/157]\t Prec@1 34.375 (31.801)\nsemi Test: [100/157]\t Prec@1 25.000 (31.946)\nsemi Test: [150/157]\t Prec@1 31.250 (31.767)\nsemi Test Prec@1 31.68\nsup Test: [0/157]\t Prec@1 23.438 (23.438)\nsup Test: [50/157]\t Prec@1 29.688 (29.779)\nsup Test: [100/157]\t Prec@1 26.562 (30.538)\nsup Test: [150/157]\t Prec@1 25.000 (30.288)\nsup Test Prec@1 30.23\n[0/2][9/782] Loss_D: 3.35 Loss_G: 23.30 D(x): 0.71 D(G(z)): 0.47 / 0.40 | Acc 9.4 / 7.8\n[0/2][10/782] Loss_D: 3.12 Loss_G: 23.08 D(x): 0.79 D(G(z)): 0.50 / 0.38 | Acc 15.6 / 8.5\n[0/2][11/782] Loss_D: 2.99 Loss_G: 23.18 D(x): 0.77 D(G(z)): 0.49 / 0.37 | Acc 31.2 / 10.4\n[0/2][12/782] Loss_D: 2.85 Loss_G: 23.45 D(x): 0.79 D(G(z)): 0.48 / 0.33 | Acc 26.6 / 11.7\n[0/2][13/782] Loss_D: 3.13 Loss_G: 23.30 D(x): 0.76 D(G(z)): 0.46 / 0.33 | Acc 15.6 / 11.9\n[0/2][14/782] Loss_D: 3.32 Loss_G: 23.48 D(x): 0.76 D(G(z)): 0.47 / 0.29 | Acc 14.1 / 12.1\n[0/2][15/782] Loss_D: 3.14 Loss_G: 23.24 D(x): 0.78 D(G(z)): 0.42 / 0.27 | Acc 14.1 / 12.2\n[0/2][16/782] Loss_D: 3.16 Loss_G: 23.23 D(x): 0.81 D(G(z)): 0.41 / 0.27 | Acc 14.1 / 12.3\n[0/2][17/782] Loss_D: 2.70 Loss_G: 23.46 D(x): 0.83 D(G(z)): 0.41 / 0.25 | Acc 31.2 / 13.4\n[0/2][18/782] Loss_D: 2.90 Loss_G: 23.47 D(x): 0.79 D(G(z)): 0.40 / 0.25 | Acc 17.2 / 13.6\nsemi Test: [0/157]\t Prec@1 35.938 (35.938)\nsemi Test: [50/157]\t Prec@1 34.375 (32.200)\nsemi Test: [100/157]\t Prec@1 29.688 (32.240)\nsemi Test: [150/157]\t Prec@1 31.250 (32.202)\nsemi Test Prec@1 32.07\nsup Test: [0/157]\t Prec@1 34.375 (34.375)\nsup Test: [50/157]\t Prec@1 34.375 (32.874)\nsup Test: [100/157]\t Prec@1 29.688 (33.277)\nsup Test: [150/157]\t Prec@1 34.375 (33.361)\nsup Test Prec@1 33.18\n[0/2][19/782] Loss_D: 2.82 Loss_G: 23.35 D(x): 0.82 D(G(z)): 0.40 / 0.23 | Acc 21.9 / 14.0\n[0/2][20/782] Loss_D: 2.71 Loss_G: 23.73 D(x): 0.84 D(G(z)): 0.38 / 0.21 | Acc 28.1 / 14.7\n[0/2][21/782] Loss_D: 2.51 Loss_G: 23.60 D(x): 0.84 D(G(z)): 0.37 / 0.18 | Acc 32.8 / 15.5\n[0/2][22/782] Loss_D: 2.50 Loss_G: 23.64 D(x): 0.80 D(G(z)): 0.34 / 0.17 | Acc 40.6 / 16.6\n[0/2][23/782] Loss_D: 2.78 Loss_G: 23.66 D(x): 0.81 D(G(z)): 0.34 / 0.18 | Acc 28.1 / 17.1\n[0/2][24/782] Loss_D: 2.73 Loss_G: 23.67 D(x): 0.80 D(G(z)): 0.39 / 0.16 | Acc 23.4 / 17.3\n[0/2][25/782] Loss_D: 2.85 Loss_G: 23.72 D(x): 0.77 D(G(z)): 0.32 / 0.21 | Acc 20.3 / 17.4\n[0/2][26/782] Loss_D: 2.70 Loss_G: 23.44 D(x): 0.81 D(G(z)): 0.40 / 0.20 | Acc 23.4 / 17.7\n[0/2][27/782] Loss_D: 2.64 Loss_G: 23.52 D(x): 0.84 D(G(z)): 0.35 / 0.15 | Acc 31.2 / 18.1\n[0/2][28/782] Loss_D: 2.62 Loss_G: 24.19 D(x): 0.84 D(G(z)): 0.29 / 0.11 | Acc 23.4 / 18.3\nsemi Test: [0/157]\t Prec@1 29.688 (29.688)\nsemi Test: [50/157]\t Prec@1 26.562 
(32.782)\nsemi Test: [100/157]\t Prec@1 28.125 (32.503)\nsemi Test: [150/157]\t Prec@1 32.812 (32.595)\nsemi Test Prec@1 32.52\nsup Test: [0/157]\t Prec@1 32.812 (32.812)\nsup Test: [50/157]\t Prec@1 37.500 (34.589)\nsup Test: [100/157]\t Prec@1 26.562 (34.545)\nsup Test: [150/157]\t Prec@1 39.062 (34.416)\nsup Test Prec@1 34.42\n[0/2][29/782] Loss_D: 2.50 Loss_G: 23.44 D(x): 0.77 D(G(z)): 0.23 / 0.18 | Acc 26.6 / 18.6\n[0/2][30/782] Loss_D: 2.58 Loss_G: 23.89 D(x): 0.89 D(G(z)): 0.39 / 0.13 | Acc 29.7 / 19.0\n[0/2][31/782] Loss_D: 2.53 Loss_G: 23.88 D(x): 0.82 D(G(z)): 0.27 / 0.13 | Acc 34.4 / 19.4\n[0/2][32/782] Loss_D: 2.42 Loss_G: 23.86 D(x): 0.88 D(G(z)): 0.29 / 0.12 | Acc 32.8 / 19.8\n[0/2][33/782] Loss_D: 2.41 Loss_G: 23.52 D(x): 0.84 D(G(z)): 0.29 / 0.12 | Acc 32.8 / 20.2\n[0/2][34/782] Loss_D: 2.40 Loss_G: 23.61 D(x): 0.85 D(G(z)): 0.28 / 0.12 | Acc 31.2 / 20.5\n[0/2][35/782] Loss_D: 2.63 Loss_G: 23.76 D(x): 0.86 D(G(z)): 0.29 / 0.10 | Acc 28.1 / 20.7\n[0/2][36/782] Loss_D: 2.25 Loss_G: 23.88 D(x): 0.89 D(G(z)): 0.27 / 0.09 | Acc 34.4 / 21.1\n[0/2][37/782] Loss_D: 2.43 Loss_G: 23.87 D(x): 0.81 D(G(z)): 0.23 / 0.10 | Acc 40.6 / 21.6\n[0/2][38/782] Loss_D: 2.19 Loss_G: 23.69 D(x): 0.89 D(G(z)): 0.25 / 0.10 | Acc 34.4 / 22.0\nsemi Test: [0/157]\t Prec@1 28.125 (28.125)\nsemi Test: [50/157]\t Prec@1 31.250 (36.336)\nsemi Test: [100/157]\t Prec@1 26.562 (36.525)\nsemi Test: [150/157]\t Prec@1 40.625 (36.362)\nsemi Test Prec@1 36.18\nsup Test: [0/157]\t Prec@1 28.125 (28.125)\nsup Test: [50/157]\t Prec@1 43.750 (35.325)\nsup Test: [100/157]\t Prec@1 35.938 (35.953)\nsup Test: [150/157]\t Prec@1 37.500 (35.751)\nsup Test Prec@1 35.67\n[0/2][39/782] Loss_D: 2.10 Loss_G: 24.08 D(x): 0.91 D(G(z)): 0.25 / 0.07 | Acc 43.8 / 22.5\n[0/2][40/782] Loss_D: 2.19 Loss_G: 23.19 D(x): 0.81 D(G(z)): 0.18 / 0.13 | Acc 37.5 / 22.9\n[0/2][41/782] Loss_D: 2.15 Loss_G: 24.32 D(x): 0.92 D(G(z)): 0.31 / 0.08 | Acc 46.9 / 23.4\n[0/2][42/782] Loss_D: 2.30 Loss_G: 23.54 D(x): 0.82 D(G(z)): 0.18 / 0.10 | Acc 43.8 / 23.9\n[0/2][43/782] Loss_D: 2.02 Loss_G: 23.73 D(x): 0.86 D(G(z)): 0.26 / 0.08 | Acc 53.1 / 24.6\n[0/2][44/782] Loss_D: 2.13 Loss_G: 24.02 D(x): 0.90 D(G(z)): 0.21 / 0.05 | Acc 45.3 / 25.0\n[0/2][45/782] Loss_D: 2.05 Loss_G: 23.77 D(x): 0.91 D(G(z)): 0.13 / 0.08 | Acc 34.4 / 25.2\n[0/2][46/782] Loss_D: 2.09 Loss_G: 23.93 D(x): 0.91 D(G(z)): 0.25 / 0.07 | Acc 34.4 / 25.4\n[0/2][47/782] Loss_D: 2.08 Loss_G: 23.50 D(x): 0.89 D(G(z)): 0.19 / 0.11 | Acc 45.3 / 25.8\n[0/2][48/782] Loss_D: 2.07 Loss_G: 23.93 D(x): 0.93 D(G(z)): 0.24 / 0.09 | Acc 42.2 / 26.2\nsemi Test: [0/157]\t Prec@1 35.938 (35.938)\nsemi Test: [50/157]\t Prec@1 32.812 (37.132)\nsemi Test: [100/157]\t Prec@1 28.125 (37.005)\nsemi Test: [150/157]\t Prec@1 39.062 (36.714)\nsemi Test Prec@1 36.47\nsup Test: [0/157]\t Prec@1 35.938 (35.938)\nsup Test: [50/157]\t Prec@1 39.062 (37.990)\nsup Test: [100/157]\t Prec@1 40.625 (38.676)\nsup Test: [150/157]\t Prec@1 42.188 (38.162)\nsup Test Prec@1 37.99\n[0/2][49/782] Loss_D: 2.37 Loss_G: 23.44 D(x): 0.80 D(G(z)): 0.19 / 0.22 | Acc 34.4 / 26.3\n[0/2][50/782] Loss_D: 2.43 Loss_G: 23.42 D(x): 0.87 D(G(z)): 0.33 / 0.12 | Acc 37.5 / 26.6\n[0/2][51/782] Loss_D: 2.19 Loss_G: 24.20 D(x): 0.91 D(G(z)): 0.27 / 0.03 | Acc 34.4 / 26.7\n[0/2][52/782] Loss_D: 2.02 Loss_G: 23.16 D(x): 0.82 D(G(z)): 0.07 / 0.10 | Acc 42.2 / 27.0\n[0/2][53/782] Loss_D: 1.94 Loss_G: 23.66 D(x): 0.95 D(G(z)): 0.26 / 0.08 | Acc 46.9 / 27.4\n[0/2][54/782] Loss_D: 1.99 Loss_G: 23.55 D(x): 0.90 D(G(z)): 0.20 / 0.06 | Acc 45.3 / 
27.7\n[0/2][55/782] Loss_D: 2.07 Loss_G: 23.41 D(x): 0.87 D(G(z)): 0.17 / 0.09 | Acc 37.5 / 27.9\n[0/2][56/782] Loss_D: 2.02 Loss_G: 22.93 D(x): 0.87 D(G(z)): 0.23 / 0.09 | Acc 42.2 / 28.1\n[0/2][57/782] Loss_D: 2.04 Loss_G: 23.53 D(x): 0.90 D(G(z)): 0.27 / 0.05 | Acc 43.8 / 28.4\n[0/2][58/782] Loss_D: 2.01 Loss_G: 22.68 D(x): 0.84 D(G(z)): 0.12 / 0.15 | Acc 39.1 / 28.6\nsemi Test: [0/157]\t Prec@1 35.938 (35.938)\nsemi Test: [50/157]\t Prec@1 42.188 (38.848)\nsemi Test: [100/157]\t Prec@1 32.812 (38.181)\nsemi Test: [150/157]\t Prec@1 43.750 (37.841)\nsemi Test Prec@1 37.60\nsup Test: [0/157]\t Prec@1 45.312 (45.312)\nsup Test: [50/157]\t Prec@1 46.875 (41.759)\nsup Test: [100/157]\t Prec@1 32.812 (41.166)\nsup Test: [150/157]\t Prec@1 40.625 (40.056)\nsup Test Prec@1 39.87\n[0/2][59/782] Loss_D: 2.06 Loss_G: 23.68 D(x): 0.94 D(G(z)): 0.38 / 0.05 | Acc 57.8 / 29.1\n[0/2][60/782] Loss_D: 2.39 Loss_G: 21.90 D(x): 0.72 D(G(z)): 0.14 / 0.27 | Acc 26.6 / 29.0\n[0/2][61/782] Loss_D: 2.33 Loss_G: 23.12 D(x): 0.93 D(G(z)): 0.48 / 0.08 | Acc 48.4 / 29.3\n[0/2][62/782] Loss_D: 2.36 Loss_G: 22.51 D(x): 0.82 D(G(z)): 0.18 / 0.13 | Acc 31.2 / 29.4\n[0/2][63/782] Loss_D: 2.17 Loss_G: 22.25 D(x): 0.81 D(G(z)): 0.30 / 0.18 | Acc 46.9 / 29.6\n[0/2][64/782] Loss_D: 2.28 Loss_G: 22.32 D(x): 0.85 D(G(z)): 0.34 / 0.13 | Acc 46.9 / 29.9\n[0/2][65/782] Loss_D: 2.36 Loss_G: 22.81 D(x): 0.88 D(G(z)): 0.31 / 0.09 | Acc 39.1 / 30.0\n[0/2][66/782] Loss_D: 2.18 Loss_G: 22.51 D(x): 0.84 D(G(z)): 0.22 / 0.19 | Acc 35.9 / 30.1\n[0/2][67/782] Loss_D: 2.57 Loss_G: 23.22 D(x): 0.89 D(G(z)): 0.45 / 0.13 | Acc 39.1 / 30.3\n[0/2][68/782] Loss_D: 2.64 Loss_G: 21.93 D(x): 0.73 D(G(z)): 0.23 / 0.31 | Acc 31.2 / 30.3\nsemi Test: [0/157]\t Prec@1 32.812 (32.812)\nsemi Test: [50/157]\t Prec@1 43.750 (40.104)\nsemi Test: [100/157]\t Prec@1 35.938 (39.991)\nsemi Test: [150/157]\t Prec@1 37.500 (39.342)\nsemi Test Prec@1 39.16\nsup Test: [0/157]\t Prec@1 28.125 (28.125)\nsup Test: [50/157]\t Prec@1 40.625 (39.308)\nsup Test: [100/157]\t Prec@1 29.688 (39.155)\nsup Test: [150/157]\t Prec@1 46.875 (38.907)\nsup Test Prec@1 38.78\n[0/2][69/782] Loss_D: 2.75 Loss_G: 22.70 D(x): 0.91 D(G(z)): 0.48 / 0.09 | Acc 32.8 / 30.3\n[0/2][70/782] Loss_D: 2.27 Loss_G: 21.94 D(x): 0.82 D(G(z)): 0.18 / 0.20 | Acc 28.1 / 30.3\n[0/2][71/782] Loss_D: 2.21 Loss_G: 23.07 D(x): 0.94 D(G(z)): 0.37 / 0.07 | Acc 45.3 / 30.5\n[0/2][72/782] Loss_D: 2.23 Loss_G: 21.79 D(x): 0.84 D(G(z)): 0.18 / 0.19 | Acc 34.4 / 30.5\n[0/2][73/782] Loss_D: 2.25 Loss_G: 23.30 D(x): 0.89 D(G(z)): 0.44 / 0.04 | Acc 54.7 / 30.9\n[0/2][74/782] Loss_D: 2.25 Loss_G: 21.04 D(x): 0.76 D(G(z)): 0.15 / 0.36 | Acc 28.1 / 30.8\n[0/2][75/782] Loss_D: 2.91 Loss_G: 24.18 D(x): 0.94 D(G(z)): 0.70 / 0.02 | Acc 42.2 / 31.0\n[0/2][76/782] Loss_D: 3.00 Loss_G: 21.05 D(x): 0.54 D(G(z)): 0.05 / 0.36 | Acc 12.5 / 30.7\n[0/2][77/782] Loss_D: 2.80 Loss_G: 22.08 D(x): 0.97 D(G(z)): 0.64 / 0.11 | Acc 40.6 / 30.9\n[0/2][78/782] Loss_D: 2.04 Loss_G: 22.59 D(x): 0.88 D(G(z)): 0.26 / 0.06 | Acc 51.6 / 31.1\nsemi Test: [0/157]\t Prec@1 31.250 (31.250)\nsemi Test: [50/157]\t Prec@1 40.625 (38.971)\nsemi Test: [100/157]\t Prec@1 31.250 (38.552)\nsemi Test: [150/157]\t Prec@1 40.625 (38.380)\nsemi Test Prec@1 38.17\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 39.062 (43.781)\nsup Test: [100/157]\t Prec@1 34.375 (42.358)\nsup Test: [150/157]\t Prec@1 45.312 (41.898)\nsup Test Prec@1 41.78\n[0/2][79/782] Loss_D: 2.27 Loss_G: 21.87 D(x): 0.84 D(G(z)): 0.15 / 0.15 | Acc 35.9 / 
31.2\n[0/2][80/782] Loss_D: 2.23 Loss_G: 21.83 D(x): 0.88 D(G(z)): 0.35 / 0.13 | Acc 50.0 / 31.4\n[0/2][81/782] Loss_D: 2.09 Loss_G: 22.62 D(x): 0.92 D(G(z)): 0.32 / 0.06 | Acc 50.0 / 31.7\n[0/2][82/782] Loss_D: 2.16 Loss_G: 21.26 D(x): 0.75 D(G(z)): 0.15 / 0.25 | Acc 35.9 / 31.7\n[0/2][83/782] Loss_D: 2.33 Loss_G: 23.48 D(x): 0.95 D(G(z)): 0.54 / 0.03 | Acc 51.6 / 31.9\n[0/2][84/782] Loss_D: 2.34 Loss_G: 21.13 D(x): 0.65 D(G(z)): 0.07 / 0.26 | Acc 35.9 / 32.0\n[0/2][85/782] Loss_D: 2.53 Loss_G: 22.35 D(x): 0.96 D(G(z)): 0.53 / 0.08 | Acc 46.9 / 32.2\n[0/2][86/782] Loss_D: 1.86 Loss_G: 22.72 D(x): 0.91 D(G(z)): 0.18 / 0.05 | Acc 45.3 / 32.3\n[0/2][87/782] Loss_D: 1.84 Loss_G: 21.95 D(x): 0.87 D(G(z)): 0.14 / 0.12 | Acc 46.9 / 32.5\n[0/2][88/782] Loss_D: 2.04 Loss_G: 22.21 D(x): 0.93 D(G(z)): 0.33 / 0.07 | Acc 45.3 / 32.6\nsemi Test: [0/157]\t Prec@1 31.250 (31.250)\nsemi Test: [50/157]\t Prec@1 45.312 (40.625)\nsemi Test: [100/157]\t Prec@1 31.250 (40.826)\nsemi Test: [150/157]\t Prec@1 39.062 (40.728)\nsemi Test Prec@1 40.55\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 42.188 (42.433)\nsup Test: [100/157]\t Prec@1 40.625 (42.064)\nsup Test: [150/157]\t Prec@1 46.875 (41.743)\nsup Test Prec@1 41.65\n[0/2][89/782] Loss_D: 2.32 Loss_G: 21.35 D(x): 0.82 D(G(z)): 0.20 / 0.16 | Acc 40.6 / 32.7\n[0/2][90/782] Loss_D: 2.50 Loss_G: 22.94 D(x): 0.93 D(G(z)): 0.44 / 0.05 | Acc 39.1 / 32.8\n[0/2][91/782] Loss_D: 2.23 Loss_G: 21.30 D(x): 0.76 D(G(z)): 0.13 / 0.22 | Acc 35.9 / 32.8\n[0/2][92/782] Loss_D: 2.28 Loss_G: 22.28 D(x): 0.92 D(G(z)): 0.46 / 0.10 | Acc 46.9 / 33.0\n[0/2][93/782] Loss_D: 2.11 Loss_G: 22.24 D(x): 0.84 D(G(z)): 0.21 / 0.11 | Acc 46.9 / 33.1\n[0/2][94/782] Loss_D: 2.06 Loss_G: 21.34 D(x): 0.85 D(G(z)): 0.22 / 0.23 | Acc 45.3 / 33.2\n[0/2][95/782] Loss_D: 2.49 Loss_G: 22.41 D(x): 0.92 D(G(z)): 0.39 / 0.06 | Acc 37.5 / 33.3\n[0/2][96/782] Loss_D: 2.16 Loss_G: 21.56 D(x): 0.81 D(G(z)): 0.13 / 0.15 | Acc 42.2 / 33.4\n[0/2][97/782] Loss_D: 2.09 Loss_G: 22.44 D(x): 0.94 D(G(z)): 0.36 / 0.09 | Acc 54.7 / 33.6\n[0/2][98/782] Loss_D: 2.29 Loss_G: 21.44 D(x): 0.87 D(G(z)): 0.20 / 0.13 | Acc 35.9 / 33.6\nsemi Test: [0/157]\t Prec@1 37.500 (37.500)\nsemi Test: [50/157]\t Prec@1 48.438 (41.299)\nsemi Test: [100/157]\t Prec@1 39.062 (41.615)\nsemi Test: [150/157]\t Prec@1 37.500 (41.287)\nsemi Test Prec@1 41.25\nsup Test: [0/157]\t Prec@1 35.938 (35.938)\nsup Test: [50/157]\t Prec@1 39.062 (40.931)\nsup Test: [100/157]\t Prec@1 50.000 (41.337)\nsup Test: [150/157]\t Prec@1 43.750 (41.142)\nsup Test Prec@1 41.03\n[0/2][99/782] Loss_D: 2.19 Loss_G: 21.78 D(x): 0.89 D(G(z)): 0.31 / 0.09 | Acc 51.6 / 33.8\n[0/2][100/782] Loss_D: 2.04 Loss_G: 21.33 D(x): 0.87 D(G(z)): 0.23 / 0.14 | Acc 45.3 / 33.9\n[0/2][101/782] Loss_D: 2.10 Loss_G: 21.79 D(x): 0.88 D(G(z)): 0.31 / 0.11 | Acc 46.9 / 34.0\n[0/2][102/782] Loss_D: 2.35 Loss_G: 20.93 D(x): 0.81 D(G(z)): 0.29 / 0.29 | Acc 34.4 / 34.0\n[0/2][103/782] Loss_D: 2.46 Loss_G: 22.25 D(x): 0.87 D(G(z)): 0.47 / 0.06 | Acc 50.0 / 34.2\n[0/2][104/782] Loss_D: 2.26 Loss_G: 20.28 D(x): 0.68 D(G(z)): 0.15 / 0.33 | Acc 31.2 / 34.2\n[0/2][105/782] Loss_D: 2.64 Loss_G: 22.34 D(x): 0.95 D(G(z)): 0.61 / 0.06 | Acc 50.0 / 34.3\n[0/2][106/782] Loss_D: 2.44 Loss_G: 20.66 D(x): 0.66 D(G(z)): 0.12 / 0.33 | Acc 39.1 / 34.4\n[0/2][107/782] Loss_D: 2.40 Loss_G: 21.31 D(x): 0.93 D(G(z)): 0.51 / 0.11 | Acc 54.7 / 34.5\n[0/2][108/782] Loss_D: 2.05 Loss_G: 21.62 D(x): 0.88 D(G(z)): 0.28 / 0.09 | Acc 40.6 / 34.6\nsemi Test: [0/157]\t Prec@1 40.625 
(40.625)\nsemi Test: [50/157]\t Prec@1 42.188 (42.341)\nsemi Test: [100/157]\t Prec@1 37.500 (42.358)\nsemi Test: [150/157]\t Prec@1 46.875 (42.353)\nsemi Test Prec@1 42.13\nsup Test: [0/157]\t Prec@1 29.688 (29.688)\nsup Test: [50/157]\t Prec@1 42.188 (41.605)\nsup Test: [100/157]\t Prec@1 37.500 (41.708)\nsup Test: [150/157]\t Prec@1 42.188 (41.101)\nsup Test Prec@1 41.07\n[0/2][109/782] Loss_D: 2.18 Loss_G: 20.65 D(x): 0.76 D(G(z)): 0.22 / 0.23 | Acc 43.8 / 34.7\n[0/2][110/782] Loss_D: 2.29 Loss_G: 21.81 D(x): 0.89 D(G(z)): 0.51 / 0.08 | Acc 50.0 / 34.8\n[0/2][111/782] Loss_D: 2.13 Loss_G: 20.96 D(x): 0.80 D(G(z)): 0.21 / 0.14 | Acc 43.8 / 34.9\n[0/2][112/782] Loss_D: 1.99 Loss_G: 20.74 D(x): 0.84 D(G(z)): 0.32 / 0.17 | Acc 53.1 / 35.1\n[0/2][113/782] Loss_D: 2.23 Loss_G: 21.59 D(x): 0.89 D(G(z)): 0.45 / 0.07 | Acc 56.2 / 35.3\n[0/2][114/782] Loss_D: 2.30 Loss_G: 20.27 D(x): 0.76 D(G(z)): 0.20 / 0.23 | Acc 28.1 / 35.2\n[0/2][115/782] Loss_D: 2.31 Loss_G: 22.04 D(x): 0.91 D(G(z)): 0.52 / 0.06 | Acc 51.6 / 35.3\n[0/2][116/782] Loss_D: 2.65 Loss_G: 18.96 D(x): 0.57 D(G(z)): 0.14 / 0.66 | Acc 35.9 / 35.3\n[0/2][117/782] Loss_D: 3.16 Loss_G: 21.48 D(x): 0.97 D(G(z)): 0.81 / 0.07 | Acc 64.1 / 35.6\n[0/2][118/782] Loss_D: 2.23 Loss_G: 21.06 D(x): 0.77 D(G(z)): 0.14 / 0.12 | Acc 40.6 / 35.6\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 51.562 (43.566)\nsemi Test: [100/157]\t Prec@1 40.625 (43.595)\nsemi Test: [150/157]\t Prec@1 39.062 (43.460)\nsemi Test Prec@1 43.25\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 45.312 (44.148)\nsup Test: [100/157]\t Prec@1 34.375 (44.230)\nsup Test: [150/157]\t Prec@1 45.312 (43.895)\nsup Test Prec@1 43.86\n[0/2][119/782] Loss_D: 1.97 Loss_G: 20.01 D(x): 0.77 D(G(z)): 0.23 / 0.25 | Acc 50.0 / 35.7\n[0/2][120/782] Loss_D: 2.37 Loss_G: 20.73 D(x): 0.92 D(G(z)): 0.46 / 0.14 | Acc 48.4 / 35.8\n[0/2][121/782] Loss_D: 2.12 Loss_G: 20.26 D(x): 0.79 D(G(z)): 0.28 / 0.22 | Acc 45.3 / 35.9\n[0/2][122/782] Loss_D: 2.36 Loss_G: 20.85 D(x): 0.91 D(G(z)): 0.44 / 0.10 | Acc 51.6 / 36.1\n[0/2][123/782] Loss_D: 2.44 Loss_G: 19.45 D(x): 0.75 D(G(z)): 0.23 / 0.33 | Acc 29.7 / 36.0\n[0/2][124/782] Loss_D: 2.35 Loss_G: 20.89 D(x): 0.90 D(G(z)): 0.57 / 0.09 | Acc 57.8 / 36.2\n[0/2][125/782] Loss_D: 2.26 Loss_G: 20.15 D(x): 0.78 D(G(z)): 0.19 / 0.21 | Acc 37.5 / 36.2\n[0/2][126/782] Loss_D: 2.38 Loss_G: 20.64 D(x): 0.86 D(G(z)): 0.36 / 0.15 | Acc 40.6 / 36.2\n[0/2][127/782] Loss_D: 2.01 Loss_G: 20.23 D(x): 0.81 D(G(z)): 0.28 / 0.16 | Acc 57.8 / 36.4\n[0/2][128/782] Loss_D: 2.14 Loss_G: 20.32 D(x): 0.86 D(G(z)): 0.33 / 0.13 | Acc 48.4 / 36.5\nsemi Test: [0/157]\t Prec@1 45.312 (45.312)\nsemi Test: [50/157]\t Prec@1 48.438 (43.290)\nsemi Test: [100/157]\t Prec@1 42.188 (43.781)\nsemi Test: [150/157]\t Prec@1 40.625 (43.709)\nsemi Test Prec@1 43.51\nsup Test: [0/157]\t Prec@1 45.312 (45.312)\nsup Test: [50/157]\t Prec@1 40.625 (44.179)\nsup Test: [100/157]\t Prec@1 42.188 (43.595)\nsup Test: [150/157]\t Prec@1 48.438 (43.171)\nsup Test Prec@1 43.12\n[0/2][129/782] Loss_D: 2.06 Loss_G: 20.19 D(x): 0.85 D(G(z)): 0.28 / 0.17 | Acc 43.8 / 36.5\n[0/2][130/782] Loss_D: 2.16 Loss_G: 20.01 D(x): 0.83 D(G(z)): 0.34 / 0.20 | Acc 48.4 / 36.6\n[0/2][131/782] Loss_D: 1.99 Loss_G: 20.86 D(x): 0.86 D(G(z)): 0.37 / 0.12 | Acc 59.4 / 36.8\n[0/2][132/782] Loss_D: 2.23 Loss_G: 19.74 D(x): 0.79 D(G(z)): 0.25 / 0.22 | Acc 26.6 / 36.7\n[0/2][133/782] Loss_D: 2.23 Loss_G: 21.01 D(x): 0.89 D(G(z)): 0.41 / 0.08 | Acc 46.9 / 36.8\n[0/2][134/782] Loss_D: 
2.08 Loss_G: 19.64 D(x): 0.74 D(G(z)): 0.17 / 0.27 | Acc 48.4 / 36.9\n[0/2][135/782] Loss_D: 2.37 Loss_G: 21.26 D(x): 0.93 D(G(z)): 0.49 / 0.06 | Acc 50.0 / 37.0\n[0/2][136/782] Loss_D: 2.01 Loss_G: 19.40 D(x): 0.78 D(G(z)): 0.13 / 0.25 | Acc 45.3 / 37.0\n[0/2][137/782] Loss_D: 2.04 Loss_G: 20.50 D(x): 0.91 D(G(z)): 0.45 / 0.10 | Acc 70.3 / 37.3\n[0/2][138/782] Loss_D: 1.77 Loss_G: 19.91 D(x): 0.84 D(G(z)): 0.18 / 0.15 | Acc 50.0 / 37.4\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 48.438 (43.811)\nsemi Test: [100/157]\t Prec@1 39.062 (43.657)\nsemi Test: [150/157]\t Prec@1 40.625 (43.377)\nsemi Test Prec@1 43.10\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 39.062 (43.536)\nsup Test: [100/157]\t Prec@1 35.938 (44.199)\nsup Test: [150/157]\t Prec@1 40.625 (43.543)\nsup Test Prec@1 43.48\n[0/2][139/782] Loss_D: 1.99 Loss_G: 20.61 D(x): 0.92 D(G(z)): 0.32 / 0.10 | Acc 50.0 / 37.5\n[0/2][140/782] Loss_D: 1.73 Loss_G: 19.58 D(x): 0.83 D(G(z)): 0.20 / 0.17 | Acc 57.8 / 37.6\n[0/2][141/782] Loss_D: 1.99 Loss_G: 19.94 D(x): 0.84 D(G(z)): 0.34 / 0.14 | Acc 54.7 / 37.7\n[0/2][142/782] Loss_D: 1.98 Loss_G: 20.48 D(x): 0.86 D(G(z)): 0.31 / 0.10 | Acc 54.7 / 37.8\n[0/2][143/782] Loss_D: 2.12 Loss_G: 18.97 D(x): 0.75 D(G(z)): 0.22 / 0.46 | Acc 37.5 / 37.8\n[0/2][144/782] Loss_D: 2.79 Loss_G: 21.83 D(x): 0.95 D(G(z)): 0.68 / 0.03 | Acc 57.8 / 38.0\n[0/2][145/782] Loss_D: 2.44 Loss_G: 18.89 D(x): 0.58 D(G(z)): 0.05 / 0.34 | Acc 21.9 / 37.9\n[0/2][146/782] Loss_D: 2.29 Loss_G: 20.01 D(x): 0.96 D(G(z)): 0.54 / 0.14 | Acc 62.5 / 38.0\n[0/2][147/782] Loss_D: 1.93 Loss_G: 20.21 D(x): 0.90 D(G(z)): 0.29 / 0.10 | Acc 50.0 / 38.1\n[0/2][148/782] Loss_D: 2.20 Loss_G: 19.06 D(x): 0.74 D(G(z)): 0.21 / 0.29 | Acc 42.2 / 38.2\nsemi Test: [0/157]\t Prec@1 40.625 (40.625)\nsemi Test: [50/157]\t Prec@1 37.500 (44.975)\nsemi Test: [100/157]\t Prec@1 31.250 (44.431)\nsemi Test: [150/157]\t Prec@1 45.312 (44.195)\nsemi Test Prec@1 44.02\nsup Test: [0/157]\t Prec@1 45.312 (45.312)\nsup Test: [50/157]\t Prec@1 43.750 (45.588)\nsup Test: [100/157]\t Prec@1 39.062 (45.111)\nsup Test: [150/157]\t Prec@1 43.750 (44.630)\nsup Test Prec@1 44.51\n[0/2][149/782] Loss_D: 2.12 Loss_G: 20.31 D(x): 0.94 D(G(z)): 0.51 / 0.08 | Acc 56.2 / 38.3\n[0/2][150/782] Loss_D: 1.95 Loss_G: 19.69 D(x): 0.72 D(G(z)): 0.16 / 0.18 | Acc 59.4 / 38.4\n[0/2][151/782] Loss_D: 2.10 Loss_G: 19.33 D(x): 0.88 D(G(z)): 0.35 / 0.19 | Acc 46.9 / 38.5\n[0/2][152/782] Loss_D: 1.97 Loss_G: 20.19 D(x): 0.90 D(G(z)): 0.39 / 0.08 | Acc 54.7 / 38.6\n[0/2][153/782] Loss_D: 2.25 Loss_G: 18.78 D(x): 0.69 D(G(z)): 0.17 / 0.41 | Acc 39.1 / 38.6\n[0/2][154/782] Loss_D: 2.54 Loss_G: 20.83 D(x): 0.94 D(G(z)): 0.62 / 0.05 | Acc 50.0 / 38.6\n[0/2][155/782] Loss_D: 2.22 Loss_G: 18.91 D(x): 0.65 D(G(z)): 0.10 / 0.30 | Acc 40.6 / 38.7\n[0/2][156/782] Loss_D: 2.15 Loss_G: 20.05 D(x): 0.95 D(G(z)): 0.49 / 0.10 | Acc 54.7 / 38.8\n[0/2][157/782] Loss_D: 1.87 Loss_G: 19.25 D(x): 0.77 D(G(z)): 0.20 / 0.17 | Acc 56.2 / 38.9\n[0/2][158/782] Loss_D: 1.85 Loss_G: 19.71 D(x): 0.89 D(G(z)): 0.32 / 0.13 | Acc 60.9 / 39.0\nsemi Test: [0/157]\t Prec@1 40.625 (40.625)\nsemi Test: [50/157]\t Prec@1 45.312 (43.903)\nsemi Test: [100/157]\t Prec@1 35.938 (44.075)\nsemi Test: [150/157]\t Prec@1 43.750 (44.402)\nsemi Test Prec@1 44.18\nsup Test: [0/157]\t Prec@1 37.500 (37.500)\nsup Test: [50/157]\t Prec@1 40.625 (44.210)\nsup Test: [100/157]\t Prec@1 35.938 (44.493)\nsup Test: [150/157]\t Prec@1 45.312 (44.123)\nsup Test Prec@1 
44.14\n[0/2][159/782] Loss_D: 1.89 Loss_G: 19.25 D(x): 0.82 D(G(z)): 0.26 / 0.19 | Acc 57.8 / 39.1\n[0/2][160/782] Loss_D: 2.11 Loss_G: 19.73 D(x): 0.87 D(G(z)): 0.34 / 0.12 | Acc 48.4 / 39.2\n[0/2][161/782] Loss_D: 1.90 Loss_G: 19.23 D(x): 0.82 D(G(z)): 0.24 / 0.21 | Acc 57.8 / 39.3\n[0/2][162/782] Loss_D: 1.97 Loss_G: 19.50 D(x): 0.81 D(G(z)): 0.35 / 0.16 | Acc 65.6 / 39.5\n[0/2][163/782] Loss_D: 1.80 Loss_G: 19.94 D(x): 0.85 D(G(z)): 0.31 / 0.09 | Acc 65.6 / 39.6\n[0/2][164/782] Loss_D: 1.76 Loss_G: 18.96 D(x): 0.81 D(G(z)): 0.17 / 0.19 | Acc 50.0 / 39.7\n[0/2][165/782] Loss_D: 1.75 Loss_G: 19.76 D(x): 0.93 D(G(z)): 0.36 / 0.10 | Acc 62.5 / 39.8\n[0/2][166/782] Loss_D: 1.79 Loss_G: 18.88 D(x): 0.79 D(G(z)): 0.19 / 0.20 | Acc 60.9 / 40.0\n[0/2][167/782] Loss_D: 2.00 Loss_G: 19.92 D(x): 0.90 D(G(z)): 0.39 / 0.07 | Acc 54.7 / 40.0\n[0/2][168/782] Loss_D: 2.06 Loss_G: 18.65 D(x): 0.80 D(G(z)): 0.16 / 0.23 | Acc 37.5 / 40.0\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 39.062 (45.772)\nsemi Test: [100/157]\t Prec@1 35.938 (45.498)\nsemi Test: [150/157]\t Prec@1 46.875 (45.788)\nsemi Test Prec@1 45.53\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 42.188 (45.987)\nsup Test: [100/157]\t Prec@1 37.500 (45.931)\nsup Test: [150/157]\t Prec@1 45.312 (45.519)\nsup Test Prec@1 45.46\n[0/2][169/782] Loss_D: 2.07 Loss_G: 20.25 D(x): 0.91 D(G(z)): 0.44 / 0.07 | Acc 59.4 / 40.1\n[0/2][170/782] Loss_D: 2.16 Loss_G: 18.39 D(x): 0.75 D(G(z)): 0.13 / 0.37 | Acc 35.9 / 40.1\n[0/2][171/782] Loss_D: 2.31 Loss_G: 20.36 D(x): 0.95 D(G(z)): 0.53 / 0.04 | Acc 51.6 / 40.2\n[0/2][172/782] Loss_D: 2.07 Loss_G: 18.67 D(x): 0.70 D(G(z)): 0.09 / 0.26 | Acc 46.9 / 40.2\n[0/2][173/782] Loss_D: 1.99 Loss_G: 19.94 D(x): 0.95 D(G(z)): 0.42 / 0.10 | Acc 68.8 / 40.4\n[0/2][174/782] Loss_D: 1.64 Loss_G: 19.65 D(x): 0.89 D(G(z)): 0.18 / 0.09 | Acc 56.2 / 40.5\n[0/2][175/782] Loss_D: 1.81 Loss_G: 18.40 D(x): 0.81 D(G(z)): 0.17 / 0.29 | Acc 54.7 / 40.6\n[0/2][176/782] Loss_D: 1.96 Loss_G: 20.00 D(x): 0.94 D(G(z)): 0.50 / 0.06 | Acc 65.6 / 40.7\n[0/2][177/782] Loss_D: 1.91 Loss_G: 18.65 D(x): 0.78 D(G(z)): 0.13 / 0.20 | Acc 42.2 / 40.7\n[0/2][178/782] Loss_D: 1.99 Loss_G: 19.49 D(x): 0.91 D(G(z)): 0.38 / 0.14 | Acc 64.1 / 40.8\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 42.188 (45.741)\nsemi Test: [100/157]\t Prec@1 34.375 (45.204)\nsemi Test: [150/157]\t Prec@1 45.312 (44.630)\nsemi Test Prec@1 44.52\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 40.625 (45.312)\nsup Test: [100/157]\t Prec@1 34.375 (44.817)\nsup Test: [150/157]\t Prec@1 43.750 (44.443)\nsup Test Prec@1 44.38\n[0/2][179/782] Loss_D: 2.01 Loss_G: 18.46 D(x): 0.81 D(G(z)): 0.25 / 0.21 | Acc 45.3 / 40.9\n[0/2][180/782] Loss_D: 1.89 Loss_G: 19.60 D(x): 0.90 D(G(z)): 0.41 / 0.09 | Acc 65.6 / 41.0\n[0/2][181/782] Loss_D: 1.81 Loss_G: 19.11 D(x): 0.81 D(G(z)): 0.18 / 0.17 | Acc 50.0 / 41.0\n[0/2][182/782] Loss_D: 1.62 Loss_G: 19.29 D(x): 0.92 D(G(z)): 0.34 / 0.09 | Acc 73.4 / 41.2\n[0/2][183/782] Loss_D: 1.76 Loss_G: 18.78 D(x): 0.81 D(G(z)): 0.21 / 0.19 | Acc 60.9 / 41.3\n[0/2][184/782] Loss_D: 1.95 Loss_G: 19.09 D(x): 0.88 D(G(z)): 0.36 / 0.16 | Acc 57.8 / 41.4\n[0/2][185/782] Loss_D: 2.09 Loss_G: 18.97 D(x): 0.81 D(G(z)): 0.30 / 0.16 | Acc 50.0 / 41.5\n[0/2][186/782] Loss_D: 1.98 Loss_G: 19.12 D(x): 0.82 D(G(z)): 0.31 / 0.14 | Acc 57.8 / 41.6\n[0/2][187/782] Loss_D: 2.04 Loss_G: 17.97 D(x): 0.78 D(G(z)): 0.28 / 0.28 | Acc 53.1 / 41.6\n[0/2][188/782] Loss_D: 
2.11 Loss_G: 19.27 D(x): 0.90 D(G(z)): 0.43 / 0.08 | Acc 56.2 / 41.7\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 43.750 (46.140)\nsemi Test: [100/157]\t Prec@1 39.062 (45.854)\nsemi Test: [150/157]\t Prec@1 42.188 (45.592)\nsemi Test Prec@1 45.34\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 48.438 (46.385)\nsup Test: [100/157]\t Prec@1 42.188 (45.684)\nsup Test: [150/157]\t Prec@1 45.312 (45.509)\nsup Test Prec@1 45.50\n[0/2][189/782] Loss_D: 1.74 Loss_G: 18.44 D(x): 0.80 D(G(z)): 0.16 / 0.19 | Acc 56.2 / 41.8\n[0/2][190/782] Loss_D: 1.67 Loss_G: 18.96 D(x): 0.89 D(G(z)): 0.35 / 0.12 | Acc 73.4 / 41.9\n[0/2][191/782] Loss_D: 1.90 Loss_G: 18.71 D(x): 0.86 D(G(z)): 0.23 / 0.12 | Acc 48.4 / 42.0\n[0/2][192/782] Loss_D: 1.82 Loss_G: 18.25 D(x): 0.87 D(G(z)): 0.24 / 0.20 | Acc 50.0 / 42.0\n[0/2][193/782] Loss_D: 1.74 Loss_G: 19.48 D(x): 0.94 D(G(z)): 0.42 / 0.05 | Acc 62.5 / 42.1\n[0/2][194/782] Loss_D: 1.98 Loss_G: 18.06 D(x): 0.78 D(G(z)): 0.11 / 0.24 | Acc 45.3 / 42.1\n[0/2][195/782] Loss_D: 1.93 Loss_G: 19.44 D(x): 0.93 D(G(z)): 0.47 / 0.06 | Acc 67.2 / 42.3\n[0/2][196/782] Loss_D: 2.12 Loss_G: 18.64 D(x): 0.75 D(G(z)): 0.16 / 0.16 | Acc 42.2 / 42.3\n[0/2][197/782] Loss_D: 1.89 Loss_G: 18.68 D(x): 0.90 D(G(z)): 0.33 / 0.14 | Acc 54.7 / 42.3\n[0/2][198/782] Loss_D: 1.74 Loss_G: 18.61 D(x): 0.90 D(G(z)): 0.27 / 0.11 | Acc 56.2 / 42.4\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 42.188 (46.538)\nsemi Test: [100/157]\t Prec@1 40.625 (46.426)\nsemi Test: [150/157]\t Prec@1 40.625 (46.316)\nsemi Test Prec@1 46.17\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 42.188 (44.730)\nsup Test: [100/157]\t Prec@1 35.938 (44.570)\nsup Test: [150/157]\t Prec@1 40.625 (44.443)\nsup Test Prec@1 44.36\n[0/2][199/782] Loss_D: 1.61 Loss_G: 18.94 D(x): 0.90 D(G(z)): 0.24 / 0.10 | Acc 60.9 / 42.5\n[0/2][200/782] Loss_D: 1.65 Loss_G: 18.44 D(x): 0.86 D(G(z)): 0.19 / 0.16 | Acc 59.4 / 42.6\n[0/2][201/782] Loss_D: 1.65 Loss_G: 18.45 D(x): 0.90 D(G(z)): 0.29 / 0.11 | Acc 65.6 / 42.7\n[0/2][202/782] Loss_D: 1.71 Loss_G: 19.16 D(x): 0.91 D(G(z)): 0.22 / 0.08 | Acc 57.8 / 42.8\n[0/2][203/782] Loss_D: 1.63 Loss_G: 18.14 D(x): 0.88 D(G(z)): 0.16 / 0.17 | Acc 53.1 / 42.8\n[0/2][204/782] Loss_D: 1.67 Loss_G: 19.04 D(x): 0.93 D(G(z)): 0.33 / 0.09 | Acc 60.9 / 42.9\n[0/2][205/782] Loss_D: 1.87 Loss_G: 18.74 D(x): 0.87 D(G(z)): 0.20 / 0.17 | Acc 46.9 / 42.9\n[0/2][206/782] Loss_D: 2.15 Loss_G: 18.73 D(x): 0.83 D(G(z)): 0.34 / 0.17 | Acc 53.1 / 43.0\n[0/2][207/782] Loss_D: 2.00 Loss_G: 19.22 D(x): 0.82 D(G(z)): 0.33 / 0.14 | Acc 60.9 / 43.1\n[0/2][208/782] Loss_D: 2.14 Loss_G: 18.52 D(x): 0.80 D(G(z)): 0.29 / 0.16 | Acc 46.9 / 43.1\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 48.438 (46.507)\nsemi Test: [100/157]\t Prec@1 48.438 (46.287)\nsemi Test: [150/157]\t Prec@1 45.312 (46.275)\nsemi Test Prec@1 46.01\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 42.188 (46.385)\nsup Test: [100/157]\t Prec@1 35.938 (45.869)\nsup Test: [150/157]\t Prec@1 48.438 (45.582)\nsup Test Prec@1 45.44\n[0/2][209/782] Loss_D: 1.79 Loss_G: 18.73 D(x): 0.84 D(G(z)): 0.32 / 0.14 | Acc 71.9 / 43.2\n[0/2][210/782] Loss_D: 1.91 Loss_G: 18.81 D(x): 0.87 D(G(z)): 0.28 / 0.14 | Acc 43.8 / 43.2\n[0/2][211/782] Loss_D: 1.77 Loss_G: 19.14 D(x): 0.87 D(G(z)): 0.27 / 0.09 | Acc 59.4 / 43.3\n[0/2][212/782] Loss_D: 1.75 Loss_G: 18.30 D(x): 0.87 D(G(z)): 0.18 / 0.17 | Acc 51.6 / 
43.3\n[0/2][213/782] Loss_D: 1.69 Loss_G: 18.95 D(x): 0.93 D(G(z)): 0.32 / 0.06 | Acc 64.1 / 43.4\n[0/2][214/782] Loss_D: 1.63 Loss_G: 17.52 D(x): 0.82 D(G(z)): 0.14 / 0.21 | Acc 60.9 / 43.5\n[0/2][215/782] Loss_D: 1.88 Loss_G: 19.76 D(x): 0.96 D(G(z)): 0.45 / 0.03 | Acc 60.9 / 43.6\n[0/2][216/782] Loss_D: 1.91 Loss_G: 17.37 D(x): 0.81 D(G(z)): 0.06 / 0.29 | Acc 32.8 / 43.5\n[0/2][217/782] Loss_D: 1.97 Loss_G: 19.10 D(x): 0.96 D(G(z)): 0.49 / 0.05 | Acc 64.1 / 43.6\n[0/2][218/782] Loss_D: 1.60 Loss_G: 18.52 D(x): 0.83 D(G(z)): 0.12 / 0.11 | Acc 62.5 / 43.7\nsemi Test: [0/157]\t Prec@1 42.188 (42.188)\nsemi Test: [50/157]\t Prec@1 54.688 (46.844)\nsemi Test: [100/157]\t Prec@1 42.188 (46.256)\nsemi Test: [150/157]\t Prec@1 40.625 (45.726)\nsemi Test Prec@1 45.50\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 46.875 (45.619)\nsup Test: [100/157]\t Prec@1 35.938 (45.343)\nsup Test: [150/157]\t Prec@1 48.438 (44.661)\nsup Test Prec@1 44.58\n[0/2][219/782] Loss_D: 1.82 Loss_G: 17.34 D(x): 0.83 D(G(z)): 0.23 / 0.25 | Acc 54.7 / 43.8\n[0/2][220/782] Loss_D: 2.03 Loss_G: 19.45 D(x): 0.96 D(G(z)): 0.46 / 0.03 | Acc 60.9 / 43.8\n[0/2][221/782] Loss_D: 1.95 Loss_G: 17.23 D(x): 0.69 D(G(z)): 0.06 / 0.32 | Acc 42.2 / 43.8\n[0/2][222/782] Loss_D: 2.01 Loss_G: 19.31 D(x): 0.97 D(G(z)): 0.52 / 0.05 | Acc 71.9 / 44.0\n[0/2][223/782] Loss_D: 1.84 Loss_G: 17.89 D(x): 0.74 D(G(z)): 0.09 / 0.21 | Acc 51.6 / 44.0\n[0/2][224/782] Loss_D: 1.65 Loss_G: 18.57 D(x): 0.96 D(G(z)): 0.38 / 0.09 | Acc 67.2 / 44.1\n[0/2][225/782] Loss_D: 1.40 Loss_G: 18.71 D(x): 0.92 D(G(z)): 0.19 / 0.07 | Acc 67.2 / 44.2\n[0/2][226/782] Loss_D: 1.63 Loss_G: 18.03 D(x): 0.87 D(G(z)): 0.14 / 0.14 | Acc 56.2 / 44.3\n[0/2][227/782] Loss_D: 1.36 Loss_G: 18.00 D(x): 0.94 D(G(z)): 0.24 / 0.14 | Acc 70.3 / 44.4\n[0/2][228/782] Loss_D: 1.54 Loss_G: 18.41 D(x): 0.91 D(G(z)): 0.26 / 0.08 | Acc 68.8 / 44.5\nsemi Test: [0/157]\t Prec@1 39.062 (39.062)\nsemi Test: [50/157]\t Prec@1 40.625 (46.324)\nsemi Test: [100/157]\t Prec@1 37.500 (45.993)\nsemi Test: [150/157]\t Prec@1 40.625 (46.182)\nsemi Test Prec@1 46.00\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 40.625 (47.488)\nsup Test: [100/157]\t Prec@1 42.188 (46.890)\nsup Test: [150/157]\t Prec@1 46.875 (46.358)\nsup Test Prec@1 46.18\n[0/2][229/782] Loss_D: 1.54 Loss_G: 18.25 D(x): 0.88 D(G(z)): 0.15 / 0.12 | Acc 64.1 / 44.6\n[0/2][230/782] Loss_D: 1.72 Loss_G: 17.72 D(x): 0.87 D(G(z)): 0.20 / 0.14 | Acc 57.8 / 44.6\n[0/2][231/782] Loss_D: 1.45 Loss_G: 18.99 D(x): 0.96 D(G(z)): 0.27 / 0.05 | Acc 64.1 / 44.7\n[0/2][232/782] Loss_D: 1.75 Loss_G: 18.21 D(x): 0.86 D(G(z)): 0.11 / 0.10 | Acc 56.2 / 44.7\n[0/2][233/782] Loss_D: 1.34 Loss_G: 18.24 D(x): 0.91 D(G(z)): 0.23 / 0.09 | Acc 78.1 / 44.9\n[0/2][234/782] Loss_D: 1.47 Loss_G: 18.65 D(x): 0.94 D(G(z)): 0.22 / 0.06 | Acc 59.4 / 45.0\n[0/2][235/782] Loss_D: 1.42 Loss_G: 17.53 D(x): 0.76 D(G(z)): 0.13 / 0.17 | Acc 71.9 / 45.1\n[0/2][236/782] Loss_D: 1.96 Loss_G: 18.98 D(x): 0.95 D(G(z)): 0.39 / 0.04 | Acc 56.2 / 45.1\n[0/2][237/782] Loss_D: 1.71 Loss_G: 18.36 D(x): 0.87 D(G(z)): 0.07 / 0.06 | Acc 54.7 / 45.2\n[0/2][238/782] Loss_D: 1.54 Loss_G: 17.32 D(x): 0.88 D(G(z)): 0.14 / 0.18 | Acc 59.4 / 45.2\nsemi Test: [0/157]\t Prec@1 40.625 (40.625)\nsemi Test: [50/157]\t Prec@1 48.438 (45.558)\nsemi Test: [100/157]\t Prec@1 35.938 (45.684)\nsemi Test: [150/157]\t Prec@1 40.625 (45.561)\nsemi Test Prec@1 45.28\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 39.062 (44.148)\nsup 
Test: [100/157]\t Prec@1 46.875 (44.524)\nsup Test: [150/157]\t Prec@1 46.875 (43.957)\nsup Test Prec@1 43.88\n[0/2][239/782] Loss_D: 1.64 Loss_G: 18.66 D(x): 0.95 D(G(z)): 0.37 / 0.05 | Acc 64.1 / 45.3\n[0/2][240/782] Loss_D: 1.37 Loss_G: 18.45 D(x): 0.90 D(G(z)): 0.10 / 0.05 | Acc 60.9 / 45.4\n[0/2][241/782] Loss_D: 1.33 Loss_G: 18.16 D(x): 0.94 D(G(z)): 0.12 / 0.09 | Acc 62.5 / 45.4\n[0/2][242/782] Loss_D: 1.53 Loss_G: 18.20 D(x): 0.93 D(G(z)): 0.23 / 0.06 | Acc 60.9 / 45.5\n[0/2][243/782] Loss_D: 1.44 Loss_G: 17.90 D(x): 0.87 D(G(z)): 0.16 / 0.10 | Acc 75.0 / 45.6\n[0/2][244/782] Loss_D: 1.56 Loss_G: 18.93 D(x): 0.95 D(G(z)): 0.26 / 0.04 | Acc 65.6 / 45.7\n[0/2][245/782] Loss_D: 1.57 Loss_G: 17.23 D(x): 0.80 D(G(z)): 0.11 / 0.17 | Acc 57.8 / 45.7\n[0/2][246/782] Loss_D: 2.13 Loss_G: 19.87 D(x): 0.92 D(G(z)): 0.56 / 0.02 | Acc 67.2 / 45.8\n[0/2][247/782] Loss_D: 2.65 Loss_G: 17.28 D(x): 0.62 D(G(z)): 0.07 / 0.27 | Acc 37.5 / 45.8\n[0/2][248/782] Loss_D: 2.36 Loss_G: 18.30 D(x): 0.94 D(G(z)): 0.57 / 0.06 | Acc 68.8 / 45.9\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 54.688 (46.967)\nsemi Test: [100/157]\t Prec@1 40.625 (46.627)\nsemi Test: [150/157]\t Prec@1 37.500 (46.275)\nsemi Test Prec@1 46.12\nsup Test: [0/157]\t Prec@1 35.938 (35.938)\nsup Test: [50/157]\t Prec@1 39.062 (45.282)\nsup Test: [100/157]\t Prec@1 40.625 (45.514)\nsup Test: [150/157]\t Prec@1 43.750 (45.023)\nsup Test Prec@1 44.96\n[0/2][249/782] Loss_D: 1.60 Loss_G: 18.66 D(x): 0.90 D(G(z)): 0.17 / 0.06 | Acc 57.8 / 45.9\n[0/2][250/782] Loss_D: 1.47 Loss_G: 18.54 D(x): 0.89 D(G(z)): 0.20 / 0.06 | Acc 67.2 / 46.0\n[0/2][251/782] Loss_D: 1.66 Loss_G: 17.51 D(x): 0.86 D(G(z)): 0.20 / 0.12 | Acc 59.4 / 46.1\n[0/2][252/782] Loss_D: 1.66 Loss_G: 18.42 D(x): 0.92 D(G(z)): 0.32 / 0.04 | Acc 65.6 / 46.2\n[0/2][253/782] Loss_D: 1.29 Loss_G: 17.68 D(x): 0.88 D(G(z)): 0.13 / 0.09 | Acc 71.9 / 46.3\n[0/2][254/782] Loss_D: 1.58 Loss_G: 18.24 D(x): 0.89 D(G(z)): 0.31 / 0.07 | Acc 64.1 / 46.3\n[0/2][255/782] Loss_D: 1.67 Loss_G: 18.32 D(x): 0.92 D(G(z)): 0.26 / 0.07 | Acc 57.8 / 46.4\n[0/2][256/782] Loss_D: 1.77 Loss_G: 18.29 D(x): 0.85 D(G(z)): 0.31 / 0.05 | Acc 64.1 / 46.4\n[0/2][257/782] Loss_D: 2.06 Loss_G: 16.77 D(x): 0.79 D(G(z)): 0.21 / 0.19 | Acc 53.1 / 46.5\n[0/2][258/782] Loss_D: 1.88 Loss_G: 19.35 D(x): 0.92 D(G(z)): 0.49 / 0.02 | Acc 76.6 / 46.6\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 45.312 (44.975)\nsemi Test: [100/157]\t Prec@1 40.625 (44.338)\nsemi Test: [150/157]\t Prec@1 43.750 (44.412)\nsemi Test Prec@1 44.25\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 43.750 (46.783)\nsup Test: [100/157]\t Prec@1 35.938 (46.210)\nsup Test: [150/157]\t Prec@1 48.438 (46.223)\nsup Test Prec@1 46.06\n[0/2][259/782] Loss_D: 2.35 Loss_G: 15.73 D(x): 0.64 D(G(z)): 0.04 / 0.54 | Acc 25.0 / 46.5\n[0/2][260/782] Loss_D: 2.66 Loss_G: 18.75 D(x): 0.99 D(G(z)): 0.74 / 0.03 | Acc 79.7 / 46.6\n[0/2][261/782] Loss_D: 1.56 Loss_G: 18.18 D(x): 0.84 D(G(z)): 0.06 / 0.06 | Acc 53.1 / 46.6\n[0/2][262/782] Loss_D: 1.57 Loss_G: 16.82 D(x): 0.87 D(G(z)): 0.13 / 0.20 | Acc 57.8 / 46.7\n[0/2][263/782] Loss_D: 1.80 Loss_G: 18.00 D(x): 0.97 D(G(z)): 0.42 / 0.06 | Acc 68.8 / 46.8\n[0/2][264/782] Loss_D: 1.60 Loss_G: 17.67 D(x): 0.90 D(G(z)): 0.13 / 0.08 | Acc 54.7 / 46.8\n[0/2][265/782] Loss_D: 1.53 Loss_G: 17.75 D(x): 0.90 D(G(z)): 0.21 / 0.11 | Acc 62.5 / 46.9\n[0/2][266/782] Loss_D: 1.61 Loss_G: 17.78 D(x): 0.90 D(G(z)): 0.28 / 0.07 | Acc 68.8 / 
46.9\n[0/2][267/782] Loss_D: 1.89 Loss_G: 17.09 D(x): 0.83 D(G(z)): 0.22 / 0.14 | Acc 53.1 / 47.0\n[0/2][268/782] Loss_D: 1.66 Loss_G: 18.19 D(x): 0.91 D(G(z)): 0.37 / 0.05 | Acc 68.8 / 47.0\nsemi Test: [0/157]\t Prec@1 35.938 (35.938)\nsemi Test: [50/157]\t Prec@1 45.312 (47.702)\nsemi Test: [100/157]\t Prec@1 42.188 (47.772)\nsemi Test: [150/157]\t Prec@1 40.625 (47.382)\nsemi Test Prec@1 47.10\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 43.750 (45.588)\nsup Test: [100/157]\t Prec@1 34.375 (45.421)\nsup Test: [150/157]\t Prec@1 46.875 (45.012)\nsup Test Prec@1 44.91\n[0/2][269/782] Loss_D: 1.86 Loss_G: 16.77 D(x): 0.82 D(G(z)): 0.14 / 0.15 | Acc 40.6 / 47.0\n[0/2][270/782] Loss_D: 1.52 Loss_G: 18.20 D(x): 0.94 D(G(z)): 0.40 / 0.04 | Acc 68.8 / 47.1\n[0/2][271/782] Loss_D: 1.48 Loss_G: 17.46 D(x): 0.86 D(G(z)): 0.11 / 0.07 | Acc 60.9 / 47.2\n[0/2][272/782] Loss_D: 1.37 Loss_G: 17.18 D(x): 0.91 D(G(z)): 0.20 / 0.11 | Acc 65.6 / 47.2\n[0/2][273/782] Loss_D: 1.65 Loss_G: 17.42 D(x): 0.90 D(G(z)): 0.31 / 0.07 | Acc 67.2 / 47.3\n[0/2][274/782] Loss_D: 1.42 Loss_G: 17.59 D(x): 0.88 D(G(z)): 0.19 / 0.07 | Acc 65.6 / 47.4\n[0/2][275/782] Loss_D: 1.64 Loss_G: 17.02 D(x): 0.88 D(G(z)): 0.20 / 0.13 | Acc 68.8 / 47.4\n[0/2][276/782] Loss_D: 1.73 Loss_G: 18.47 D(x): 0.93 D(G(z)): 0.39 / 0.03 | Acc 62.5 / 47.5\n[0/2][277/782] Loss_D: 1.86 Loss_G: 17.08 D(x): 0.81 D(G(z)): 0.10 / 0.12 | Acc 46.9 / 47.5\n[0/2][278/782] Loss_D: 2.31 Loss_G: 19.26 D(x): 0.94 D(G(z)): 0.58 / 0.01 | Acc 60.9 / 47.5\nsemi Test: [0/157]\t Prec@1 45.312 (45.312)\nsemi Test: [50/157]\t Prec@1 51.562 (46.875)\nsemi Test: [100/157]\t Prec@1 46.875 (46.194)\nsemi Test: [150/157]\t Prec@1 45.312 (46.306)\nsemi Test Prec@1 46.06\nsup Test: [0/157]\t Prec@1 45.312 (45.312)\nsup Test: [50/157]\t Prec@1 45.312 (45.190)\nsup Test: [100/157]\t Prec@1 35.938 (45.514)\nsup Test: [150/157]\t Prec@1 42.188 (44.950)\nsup Test Prec@1 44.88\n[0/2][279/782] Loss_D: 2.17 Loss_G: 15.68 D(x): 0.56 D(G(z)): 0.06 / 0.40 | Acc 46.9 / 47.5\n[0/2][280/782] Loss_D: 3.06 Loss_G: 19.24 D(x): 0.96 D(G(z)): 0.82 / 0.02 | Acc 76.6 / 47.6\n[0/2][281/782] Loss_D: 3.93 Loss_G: 15.22 D(x): 0.34 D(G(z)): 0.04 / 0.55 | Acc 9.4 / 47.5\n[0/2][282/782] Loss_D: 2.69 Loss_G: 16.36 D(x): 0.99 D(G(z)): 0.70 / 0.19 | Acc 62.5 / 47.6\n[0/2][283/782] Loss_D: 1.62 Loss_G: 17.78 D(x): 0.97 D(G(z)): 0.35 / 0.06 | Acc 67.2 / 47.6\n[0/2][284/782] Loss_D: 1.75 Loss_G: 17.01 D(x): 0.88 D(G(z)): 0.16 / 0.11 | Acc 50.0 / 47.6\n[0/2][285/782] Loss_D: 2.06 Loss_G: 17.14 D(x): 0.86 D(G(z)): 0.48 / 0.12 | Acc 68.8 / 47.7\n[0/2][286/782] Loss_D: 2.44 Loss_G: 17.46 D(x): 0.84 D(G(z)): 0.51 / 0.07 | Acc 48.4 / 47.7\n[0/2][287/782] Loss_D: 2.60 Loss_G: 16.00 D(x): 0.59 D(G(z)): 0.26 / 0.32 | Acc 43.8 / 47.7\n[0/2][288/782] Loss_D: 2.28 Loss_G: 17.03 D(x): 0.89 D(G(z)): 0.53 / 0.12 | Acc 60.9 / 47.7\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 51.562 (45.129)\nsemi Test: [100/157]\t Prec@1 42.188 (45.204)\nsemi Test: [150/157]\t Prec@1 42.188 (45.375)\nsemi Test Prec@1 45.18\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 45.312 (46.293)\nsup Test: [100/157]\t Prec@1 40.625 (45.869)\nsup Test: [150/157]\t Prec@1 50.000 (46.068)\nsup Test Prec@1 45.93\n[0/2][289/782] Loss_D: 1.77 Loss_G: 17.06 D(x): 0.89 D(G(z)): 0.27 / 0.09 | Acc 51.6 / 47.8\n[0/2][290/782] Loss_D: 1.79 Loss_G: 16.60 D(x): 0.83 D(G(z)): 0.22 / 0.16 | Acc 51.6 / 47.8\n[0/2][291/782] Loss_D: 1.75 Loss_G: 16.67 D(x): 0.91 D(G(z)): 0.39 / 0.10 | 
Acc 64.1 / 47.8\n[0/2][292/782] Loss_D: 1.65 Loss_G: 17.08 D(x): 0.87 D(G(z)): 0.28 / 0.08 | Acc 64.1 / 47.9\n[0/2][293/782] Loss_D: 1.93 Loss_G: 16.60 D(x): 0.81 D(G(z)): 0.25 / 0.17 | Acc 53.1 / 47.9\n[0/2][294/782] Loss_D: 2.03 Loss_G: 17.13 D(x): 0.83 D(G(z)): 0.49 / 0.09 | Acc 62.5 / 48.0\n[0/2][295/782] Loss_D: 1.92 Loss_G: 16.58 D(x): 0.79 D(G(z)): 0.27 / 0.13 | Acc 50.0 / 48.0\n[0/2][296/782] Loss_D: 1.92 Loss_G: 16.65 D(x): 0.88 D(G(z)): 0.37 / 0.11 | Acc 59.4 / 48.0\n[0/2][297/782] Loss_D: 1.73 Loss_G: 16.71 D(x): 0.84 D(G(z)): 0.32 / 0.11 | Acc 64.1 / 48.0\n[0/2][298/782] Loss_D: 1.97 Loss_G: 15.70 D(x): 0.74 D(G(z)): 0.30 / 0.24 | Acc 54.7 / 48.1\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 48.438 (47.273)\nsemi Test: [100/157]\t Prec@1 42.188 (46.829)\nsemi Test: [150/157]\t Prec@1 43.750 (46.978)\nsemi Test Prec@1 46.86\nsup Test: [0/157]\t Prec@1 45.312 (45.312)\nsup Test: [50/157]\t Prec@1 42.188 (47.426)\nsup Test: [100/157]\t Prec@1 48.438 (46.983)\nsup Test: [150/157]\t Prec@1 48.438 (46.596)\nsup Test Prec@1 46.55\n[0/2][299/782] Loss_D: 2.04 Loss_G: 17.96 D(x): 0.92 D(G(z)): 0.52 / 0.04 | Acc 67.2 / 48.1\n[0/2][300/782] Loss_D: 2.13 Loss_G: 15.16 D(x): 0.63 D(G(z)): 0.11 / 0.47 | Acc 34.4 / 48.1\n[0/2][301/782] Loss_D: 2.32 Loss_G: 17.44 D(x): 0.97 D(G(z)): 0.69 / 0.05 | Acc 73.4 / 48.2\n[0/2][302/782] Loss_D: 1.74 Loss_G: 16.89 D(x): 0.80 D(G(z)): 0.13 / 0.08 | Acc 60.9 / 48.2\n[0/2][303/782] Loss_D: 1.69 Loss_G: 15.86 D(x): 0.82 D(G(z)): 0.20 / 0.24 | Acc 56.2 / 48.2\n[0/2][304/782] Loss_D: 1.90 Loss_G: 16.62 D(x): 0.90 D(G(z)): 0.46 / 0.10 | Acc 68.8 / 48.3\n[0/2][305/782] Loss_D: 1.60 Loss_G: 16.15 D(x): 0.85 D(G(z)): 0.23 / 0.14 | Acc 59.4 / 48.3\n[0/2][306/782] Loss_D: 1.74 Loss_G: 16.30 D(x): 0.83 D(G(z)): 0.30 / 0.15 | Acc 60.9 / 48.4\n[0/2][307/782] Loss_D: 1.75 Loss_G: 16.08 D(x): 0.84 D(G(z)): 0.33 / 0.17 | Acc 57.8 / 48.4\n[0/2][308/782] Loss_D: 1.68 Loss_G: 16.43 D(x): 0.85 D(G(z)): 0.34 / 0.12 | Acc 67.2 / 48.5\nsemi Test: [0/157]\t Prec@1 39.062 (39.062)\nsemi Test: [50/157]\t Prec@1 51.562 (45.711)\nsemi Test: [100/157]\t Prec@1 40.625 (45.854)\nsemi Test: [150/157]\t Prec@1 45.312 (46.058)\nsemi Test Prec@1 45.94\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 48.438 (47.304)\nsup Test: [100/157]\t Prec@1 35.938 (47.277)\nsup Test: [150/157]\t Prec@1 48.438 (46.947)\nsup Test Prec@1 46.87\n[0/2][309/782] Loss_D: 1.79 Loss_G: 15.67 D(x): 0.75 D(G(z)): 0.27 / 0.20 | Acc 68.8 / 48.5\n[0/2][310/782] Loss_D: 1.85 Loss_G: 17.01 D(x): 0.91 D(G(z)): 0.42 / 0.07 | Acc 67.2 / 48.6\n[0/2][311/782] Loss_D: 1.87 Loss_G: 15.11 D(x): 0.74 D(G(z)): 0.16 / 0.34 | Acc 53.1 / 48.6\n[0/2][312/782] Loss_D: 2.13 Loss_G: 17.31 D(x): 0.93 D(G(z)): 0.55 / 0.06 | Acc 67.2 / 48.7\n[0/2][313/782] Loss_D: 1.98 Loss_G: 15.74 D(x): 0.68 D(G(z)): 0.13 / 0.24 | Acc 54.7 / 48.7\n[0/2][314/782] Loss_D: 1.82 Loss_G: 16.42 D(x): 0.93 D(G(z)): 0.44 / 0.09 | Acc 67.2 / 48.8\n[0/2][315/782] Loss_D: 1.50 Loss_G: 16.26 D(x): 0.82 D(G(z)): 0.22 / 0.10 | Acc 67.2 / 48.8\n[0/2][316/782] Loss_D: 1.58 Loss_G: 16.06 D(x): 0.85 D(G(z)): 0.24 / 0.15 | Acc 64.1 / 48.9\n[0/2][317/782] Loss_D: 1.68 Loss_G: 16.65 D(x): 0.91 D(G(z)): 0.33 / 0.10 | Acc 64.1 / 48.9\n[0/2][318/782] Loss_D: 1.84 Loss_G: 15.98 D(x): 0.82 D(G(z)): 0.26 / 0.14 | Acc 59.4 / 48.9\nsemi Test: [0/157]\t Prec@1 45.312 (45.312)\nsemi Test: [50/157]\t Prec@1 43.750 (47.028)\nsemi Test: [100/157]\t Prec@1 46.875 (47.061)\nsemi Test: [150/157]\t Prec@1 42.188 (46.575)\nsemi Test 
Prec@1 46.38\nsup Test: [0/157]\t Prec@1 46.875 (46.875)\nsup Test: [50/157]\t Prec@1 45.312 (46.507)\nsup Test: [100/157]\t Prec@1 42.188 (46.411)\nsup Test: [150/157]\t Prec@1 42.188 (46.037)\nsup Test Prec@1 45.96\n[0/2][319/782] Loss_D: 1.65 Loss_G: 16.55 D(x): 0.89 D(G(z)): 0.33 / 0.08 | Acc 70.3 / 49.0\n[0/2][320/782] Loss_D: 1.51 Loss_G: 16.24 D(x): 0.84 D(G(z)): 0.20 / 0.10 | Acc 64.1 / 49.1\n[0/2][321/782] Loss_D: 1.62 Loss_G: 15.72 D(x): 0.82 D(G(z)): 0.26 / 0.21 | Acc 65.6 / 49.1\n[0/2][322/782] Loss_D: 1.82 Loss_G: 16.75 D(x): 0.91 D(G(z)): 0.41 / 0.07 | Acc 73.4 / 49.2\n[0/2][323/782] Loss_D: 1.80 Loss_G: 15.63 D(x): 0.75 D(G(z)): 0.19 / 0.23 | Acc 56.2 / 49.2\n[0/2][324/782] Loss_D: 1.81 Loss_G: 16.79 D(x): 0.89 D(G(z)): 0.45 / 0.08 | Acc 75.0 / 49.3\n[0/2][325/782] Loss_D: 1.70 Loss_G: 16.25 D(x): 0.85 D(G(z)): 0.22 / 0.12 | Acc 60.9 / 49.3\n[0/2][326/782] Loss_D: 1.93 Loss_G: 15.60 D(x): 0.78 D(G(z)): 0.32 / 0.23 | Acc 59.4 / 49.4\n[0/2][327/782] Loss_D: 2.01 Loss_G: 16.79 D(x): 0.89 D(G(z)): 0.50 / 0.05 | Acc 62.5 / 49.4\n[0/2][328/782] Loss_D: 1.92 Loss_G: 14.66 D(x): 0.67 D(G(z)): 0.16 / 0.45 | Acc 53.1 / 49.4\nsemi Test: [0/157]\t Prec@1 42.188 (42.188)\nsemi Test: [50/157]\t Prec@1 51.562 (47.518)\nsemi Test: [100/157]\t Prec@1 43.750 (47.355)\nsemi Test: [150/157]\t Prec@1 48.438 (47.506)\nsemi Test Prec@1 47.36\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 50.000 (48.223)\nsup Test: [100/157]\t Prec@1 34.375 (47.834)\nsup Test: [150/157]\t Prec@1 48.438 (47.423)\nsup Test Prec@1 47.31\n[0/2][329/782] Loss_D: 2.25 Loss_G: 17.17 D(x): 0.94 D(G(z)): 0.69 / 0.04 | Acc 73.4 / 49.5\n[0/2][330/782] Loss_D: 2.28 Loss_G: 15.03 D(x): 0.59 D(G(z)): 0.11 / 0.34 | Acc 34.4 / 49.4\n[0/2][331/782] Loss_D: 2.20 Loss_G: 16.35 D(x): 0.93 D(G(z)): 0.55 / 0.09 | Acc 62.5 / 49.5\n[0/2][332/782] Loss_D: 1.89 Loss_G: 16.01 D(x): 0.82 D(G(z)): 0.23 / 0.12 | Acc 54.7 / 49.5\n[0/2][333/782] Loss_D: 1.48 Loss_G: 16.02 D(x): 0.91 D(G(z)): 0.30 / 0.12 | Acc 70.3 / 49.5\n[0/2][334/782] Loss_D: 1.60 Loss_G: 15.91 D(x): 0.86 D(G(z)): 0.28 / 0.14 | Acc 68.8 / 49.6\n[0/2][335/782] Loss_D: 2.02 Loss_G: 16.10 D(x): 0.86 D(G(z)): 0.41 / 0.13 | Acc 60.9 / 49.6\n[0/2][336/782] Loss_D: 1.83 Loss_G: 15.83 D(x): 0.78 D(G(z)): 0.35 / 0.20 | Acc 65.6 / 49.7\n[0/2][337/782] Loss_D: 2.09 Loss_G: 15.89 D(x): 0.80 D(G(z)): 0.38 / 0.22 | Acc 56.2 / 49.7\n[0/2][338/782] Loss_D: 2.11 Loss_G: 15.69 D(x): 0.81 D(G(z)): 0.40 / 0.14 | Acc 57.8 / 49.7\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 50.000 (48.407)\nsemi Test: [100/157]\t Prec@1 39.062 (48.391)\nsemi Test: [150/157]\t Prec@1 45.312 (48.044)\nsemi Test Prec@1 47.88\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 46.875 (47.243)\nsup Test: [100/157]\t Prec@1 39.062 (47.587)\nsup Test: [150/157]\t Prec@1 48.438 (47.134)\nsup Test Prec@1 47.03\n[0/2][339/782] Loss_D: 1.78 Loss_G: 15.59 D(x): 0.84 D(G(z)): 0.28 / 0.20 | Acc 57.8 / 49.8\n[0/2][340/782] Loss_D: 2.08 Loss_G: 15.92 D(x): 0.88 D(G(z)): 0.42 / 0.12 | Acc 56.2 / 49.8\n[0/2][341/782] Loss_D: 1.77 Loss_G: 15.74 D(x): 0.79 D(G(z)): 0.30 / 0.17 | Acc 65.6 / 49.8\n[0/2][342/782] Loss_D: 1.90 Loss_G: 15.63 D(x): 0.81 D(G(z)): 0.40 / 0.12 | Acc 67.2 / 49.9\n[0/2][343/782] Loss_D: 2.30 Loss_G: 14.65 D(x): 0.71 D(G(z)): 0.38 / 0.40 | Acc 51.6 / 49.9\n[0/2][344/782] Loss_D: 2.45 Loss_G: 17.19 D(x): 0.88 D(G(z)): 0.62 / 0.03 | Acc 68.8 / 49.9\n[0/2][345/782] Loss_D: 2.49 Loss_G: 14.01 D(x): 0.54 D(G(z)): 0.12 / 0.66 | Acc 25.0 / 
49.9\n[0/2][346/782] Loss_D: 2.71 Loss_G: 16.89 D(x): 0.97 D(G(z)): 0.78 / 0.06 | Acc 76.6 / 49.9\n[0/2][347/782] Loss_D: 2.11 Loss_G: 15.29 D(x): 0.66 D(G(z)): 0.18 / 0.24 | Acc 54.7 / 49.9\n[0/2][348/782] Loss_D: 1.99 Loss_G: 15.04 D(x): 0.87 D(G(z)): 0.43 / 0.21 | Acc 51.6 / 50.0\nsemi Test: [0/157]\t Prec@1 37.500 (37.500)\nsemi Test: [50/157]\t Prec@1 46.875 (47.457)\nsemi Test: [100/157]\t Prec@1 37.500 (47.277)\nsemi Test: [150/157]\t Prec@1 45.312 (47.103)\nsemi Test Prec@1 46.92\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 46.875 (46.936)\nsup Test: [100/157]\t Prec@1 42.188 (47.540)\nsup Test: [150/157]\t Prec@1 48.438 (46.782)\nsup Test Prec@1 46.61\n[0/2][349/782] Loss_D: 1.93 Loss_G: 16.16 D(x): 0.89 D(G(z)): 0.47 / 0.08 | Acc 71.9 / 50.0\n[0/2][350/782] Loss_D: 2.34 Loss_G: 14.21 D(x): 0.58 D(G(z)): 0.23 / 0.54 | Acc 40.6 / 50.0\n[0/2][351/782] Loss_D: 2.28 Loss_G: 16.18 D(x): 0.96 D(G(z)): 0.71 / 0.08 | Acc 73.4 / 50.1\n[0/2][352/782] Loss_D: 2.10 Loss_G: 14.75 D(x): 0.70 D(G(z)): 0.19 / 0.29 | Acc 39.1 / 50.0\n[0/2][353/782] Loss_D: 2.32 Loss_G: 14.90 D(x): 0.78 D(G(z)): 0.49 / 0.31 | Acc 67.2 / 50.1\n[0/2][354/782] Loss_D: 2.17 Loss_G: 15.57 D(x): 0.86 D(G(z)): 0.49 / 0.12 | Acc 59.4 / 50.1\n[0/2][355/782] Loss_D: 1.90 Loss_G: 14.70 D(x): 0.72 D(G(z)): 0.26 / 0.32 | Acc 57.8 / 50.1\n[0/2][356/782] Loss_D: 1.91 Loss_G: 15.47 D(x): 0.91 D(G(z)): 0.51 / 0.14 | Acc 70.3 / 50.2\n[0/2][357/782] Loss_D: 2.04 Loss_G: 14.73 D(x): 0.73 D(G(z)): 0.29 / 0.25 | Acc 56.2 / 50.2\n[0/2][358/782] Loss_D: 2.02 Loss_G: 15.36 D(x): 0.86 D(G(z)): 0.44 / 0.13 | Acc 60.9 / 50.2\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 46.875 (49.326)\nsemi Test: [100/157]\t Prec@1 45.312 (49.041)\nsemi Test: [150/157]\t Prec@1 48.438 (49.193)\nsemi Test Prec@1 49.09\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 48.438 (47.518)\nsup Test: [100/157]\t Prec@1 39.062 (47.386)\nsup Test: [150/157]\t Prec@1 48.438 (46.968)\nsup Test Prec@1 46.79\n[0/2][359/782] Loss_D: 1.98 Loss_G: 14.47 D(x): 0.73 D(G(z)): 0.29 / 0.26 | Acc 59.4 / 50.2\n[0/2][360/782] Loss_D: 1.80 Loss_G: 15.37 D(x): 0.88 D(G(z)): 0.46 / 0.14 | Acc 68.8 / 50.3\n[0/2][361/782] Loss_D: 1.93 Loss_G: 14.53 D(x): 0.75 D(G(z)): 0.32 / 0.31 | Acc 57.8 / 50.3\n[0/2][362/782] Loss_D: 2.00 Loss_G: 15.42 D(x): 0.86 D(G(z)): 0.50 / 0.10 | Acc 67.2 / 50.4\n[0/2][363/782] Loss_D: 1.92 Loss_G: 14.42 D(x): 0.73 D(G(z)): 0.23 / 0.32 | Acc 64.1 / 50.4\n[0/2][364/782] Loss_D: 1.81 Loss_G: 15.73 D(x): 0.88 D(G(z)): 0.51 / 0.09 | Acc 78.1 / 50.5\n[0/2][365/782] Loss_D: 1.74 Loss_G: 14.82 D(x): 0.76 D(G(z)): 0.20 / 0.25 | Acc 53.1 / 50.5\n[0/2][366/782] Loss_D: 1.92 Loss_G: 15.24 D(x): 0.83 D(G(z)): 0.46 / 0.17 | Acc 70.3 / 50.5\n[0/2][367/782] Loss_D: 1.76 Loss_G: 15.44 D(x): 0.83 D(G(z)): 0.37 / 0.13 | Acc 64.1 / 50.6\n[0/2][368/782] Loss_D: 1.82 Loss_G: 14.48 D(x): 0.78 D(G(z)): 0.27 / 0.29 | Acc 60.9 / 50.6\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 56.250 (48.928)\nsemi Test: [100/157]\t Prec@1 46.875 (48.994)\nsemi Test: [150/157]\t Prec@1 43.750 (48.882)\nsemi Test Prec@1 48.72\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 45.312 (47.672)\nsup Test: [100/157]\t Prec@1 39.062 (47.587)\nsup Test: [150/157]\t Prec@1 45.312 (47.103)\nsup Test Prec@1 46.90\n[0/2][369/782] Loss_D: 1.93 Loss_G: 15.73 D(x): 0.89 D(G(z)): 0.52 / 0.09 | Acc 76.6 / 50.7\n[0/2][370/782] Loss_D: 2.00 Loss_G: 13.77 D(x): 0.61 D(G(z)): 0.21 / 0.48 | 
Acc 57.8 / 50.7\n[0/2][371/782] Loss_D: 2.32 Loss_G: 16.77 D(x): 0.93 D(G(z)): 0.70 / 0.03 | Acc 75.0 / 50.8\n[0/2][372/782] Loss_D: 2.43 Loss_G: 13.58 D(x): 0.50 D(G(z)): 0.09 / 0.65 | Acc 39.1 / 50.7\n[0/2][373/782] Loss_D: 2.78 Loss_G: 15.58 D(x): 0.97 D(G(z)): 0.80 / 0.11 | Acc 73.4 / 50.8\n[0/2][374/782] Loss_D: 2.05 Loss_G: 14.64 D(x): 0.72 D(G(z)): 0.24 / 0.19 | Acc 45.3 / 50.8\n[0/2][375/782] Loss_D: 1.79 Loss_G: 14.40 D(x): 0.79 D(G(z)): 0.36 / 0.25 | Acc 67.2 / 50.8\n[0/2][376/782] Loss_D: 2.29 Loss_G: 14.86 D(x): 0.79 D(G(z)): 0.47 / 0.19 | Acc 59.4 / 50.8\n[0/2][377/782] Loss_D: 1.94 Loss_G: 14.57 D(x): 0.71 D(G(z)): 0.39 / 0.25 | Acc 60.9 / 50.9\n[0/2][378/782] Loss_D: 2.16 Loss_G: 14.08 D(x): 0.74 D(G(z)): 0.43 / 0.28 | Acc 53.1 / 50.9\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 57.812 (48.438)\nsemi Test: [100/157]\t Prec@1 42.188 (48.329)\nsemi Test: [150/157]\t Prec@1 45.312 (48.406)\nsemi Test Prec@1 48.26\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 42.188 (46.446)\nsup Test: [100/157]\t Prec@1 43.750 (46.612)\nsup Test: [150/157]\t Prec@1 53.125 (46.471)\nsup Test Prec@1 46.25\n[0/2][379/782] Loss_D: 1.95 Loss_G: 14.69 D(x): 0.81 D(G(z)): 0.43 / 0.20 | Acc 65.6 / 50.9\n[0/2][380/782] Loss_D: 1.79 Loss_G: 14.95 D(x): 0.80 D(G(z)): 0.35 / 0.17 | Acc 65.6 / 51.0\n[0/2][381/782] Loss_D: 1.84 Loss_G: 14.87 D(x): 0.82 D(G(z)): 0.34 / 0.16 | Acc 65.6 / 51.0\n[0/2][382/782] Loss_D: 1.86 Loss_G: 14.88 D(x): 0.80 D(G(z)): 0.35 / 0.17 | Acc 60.9 / 51.0\n[0/2][383/782] Loss_D: 1.84 Loss_G: 14.85 D(x): 0.82 D(G(z)): 0.37 / 0.21 | Acc 59.4 / 51.0\n[0/2][384/782] Loss_D: 2.29 Loss_G: 14.10 D(x): 0.71 D(G(z)): 0.40 / 0.35 | Acc 51.6 / 51.0\n[0/2][385/782] Loss_D: 2.18 Loss_G: 15.90 D(x): 0.87 D(G(z)): 0.54 / 0.07 | Acc 67.2 / 51.1\n[0/2][386/782] Loss_D: 2.06 Loss_G: 13.55 D(x): 0.56 D(G(z)): 0.16 / 0.58 | Acc 54.7 / 51.1\n[0/2][387/782] Loss_D: 2.58 Loss_G: 15.44 D(x): 0.93 D(G(z)): 0.75 / 0.08 | Acc 68.8 / 51.1\n[0/2][388/782] Loss_D: 2.00 Loss_G: 14.45 D(x): 0.70 D(G(z)): 0.17 / 0.24 | Acc 51.6 / 51.1\nsemi Test: [0/157]\t Prec@1 42.188 (42.188)\nsemi Test: [50/157]\t Prec@1 53.125 (48.407)\nsemi Test: [100/157]\t Prec@1 45.312 (48.422)\nsemi Test: [150/157]\t Prec@1 40.625 (48.086)\nsemi Test Prec@1 47.91\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 43.750 (46.507)\nsup Test: [100/157]\t Prec@1 42.188 (47.092)\nsup Test: [150/157]\t Prec@1 48.438 (46.678)\nsup Test Prec@1 46.57\n[0/2][389/782] Loss_D: 1.79 Loss_G: 14.73 D(x): 0.85 D(G(z)): 0.42 / 0.19 | Acc 65.6 / 51.2\n[0/2][390/782] Loss_D: 1.84 Loss_G: 14.65 D(x): 0.81 D(G(z)): 0.36 / 0.20 | Acc 60.9 / 51.2\n[0/2][391/782] Loss_D: 2.07 Loss_G: 14.20 D(x): 0.81 D(G(z)): 0.38 / 0.26 | Acc 50.0 / 51.2\n[0/2][392/782] Loss_D: 1.92 Loss_G: 14.95 D(x): 0.81 D(G(z)): 0.46 / 0.15 | Acc 75.0 / 51.3\n[0/2][393/782] Loss_D: 1.67 Loss_G: 14.04 D(x): 0.76 D(G(z)): 0.31 / 0.26 | Acc 67.2 / 51.3\n[0/2][394/782] Loss_D: 1.72 Loss_G: 14.93 D(x): 0.84 D(G(z)): 0.45 / 0.14 | Acc 75.0 / 51.4\n[0/2][395/782] Loss_D: 1.93 Loss_G: 13.62 D(x): 0.74 D(G(z)): 0.27 / 0.40 | Acc 48.4 / 51.3\n[0/2][396/782] Loss_D: 2.11 Loss_G: 15.39 D(x): 0.89 D(G(z)): 0.57 / 0.08 | Acc 65.6 / 51.4\n[0/2][397/782] Loss_D: 1.89 Loss_G: 13.70 D(x): 0.69 D(G(z)): 0.17 / 0.36 | Acc 50.0 / 51.4\n[0/2][398/782] Loss_D: 2.13 Loss_G: 15.12 D(x): 0.87 D(G(z)): 0.58 / 0.13 | Acc 73.4 / 51.4\nsemi Test: [0/157]\t Prec@1 45.312 (45.312)\nsemi Test: [50/157]\t Prec@1 51.562 (47.702)\nsemi Test: 
[100/157]\t Prec@1 40.625 (48.082)\nsemi Test: [150/157]\t Prec@1 46.875 (48.375)\nsemi Test Prec@1 48.24\nsup Test: [0/157]\t Prec@1 35.938 (35.938)\nsup Test: [50/157]\t Prec@1 53.125 (47.396)\nsup Test: [100/157]\t Prec@1 42.188 (47.819)\nsup Test: [150/157]\t Prec@1 50.000 (47.475)\nsup Test Prec@1 47.26\n[0/2][399/782] Loss_D: 1.97 Loss_G: 13.83 D(x): 0.70 D(G(z)): 0.24 / 0.34 | Acc 59.4 / 51.5\n[0/2][400/782] Loss_D: 2.09 Loss_G: 14.65 D(x): 0.83 D(G(z)): 0.53 / 0.15 | Acc 64.1 / 51.5\n[0/2][401/782] Loss_D: 1.77 Loss_G: 14.33 D(x): 0.71 D(G(z)): 0.31 / 0.24 | Acc 64.1 / 51.5\n[0/2][402/782] Loss_D: 1.86 Loss_G: 14.76 D(x): 0.79 D(G(z)): 0.43 / 0.16 | Acc 65.6 / 51.6\n[0/2][403/782] Loss_D: 1.83 Loss_G: 14.12 D(x): 0.73 D(G(z)): 0.35 / 0.29 | Acc 68.8 / 51.6\n[0/2][404/782] Loss_D: 1.98 Loss_G: 14.93 D(x): 0.83 D(G(z)): 0.50 / 0.11 | Acc 70.3 / 51.6\n[0/2][405/782] Loss_D: 1.77 Loss_G: 13.89 D(x): 0.71 D(G(z)): 0.27 / 0.27 | Acc 64.1 / 51.7\n[0/2][406/782] Loss_D: 1.91 Loss_G: 15.14 D(x): 0.85 D(G(z)): 0.50 / 0.10 | Acc 65.6 / 51.7\n[0/2][407/782] Loss_D: 2.37 Loss_G: 12.86 D(x): 0.60 D(G(z)): 0.25 / 0.65 | Acc 39.1 / 51.7\n[0/2][408/782] Loss_D: 2.72 Loss_G: 16.40 D(x): 0.96 D(G(z)): 0.81 / 0.04 | Acc 76.6 / 51.7\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 45.312 (46.906)\nsemi Test: [100/157]\t Prec@1 42.188 (46.829)\nsemi Test: [150/157]\t Prec@1 42.188 (46.554)\nsemi Test Prec@1 46.50\nsup Test: [0/157]\t Prec@1 35.938 (35.938)\nsup Test: [50/157]\t Prec@1 40.625 (46.415)\nsup Test: [100/157]\t Prec@1 45.312 (46.566)\nsup Test: [150/157]\t Prec@1 48.438 (46.233)\nsup Test Prec@1 46.08\n[0/2][409/782] Loss_D: 2.63 Loss_G: 13.35 D(x): 0.50 D(G(z)): 0.10 / 0.46 | Acc 42.2 / 51.7\n[0/2][410/782] Loss_D: 2.20 Loss_G: 14.82 D(x): 0.95 D(G(z)): 0.63 / 0.13 | Acc 60.9 / 51.7\n[0/2][411/782] Loss_D: 1.70 Loss_G: 14.91 D(x): 0.81 D(G(z)): 0.28 / 0.12 | Acc 65.6 / 51.8\n[0/2][412/782] Loss_D: 1.87 Loss_G: 13.53 D(x): 0.76 D(G(z)): 0.28 / 0.37 | Acc 56.2 / 51.8\n[0/2][413/782] Loss_D: 2.03 Loss_G: 14.84 D(x): 0.87 D(G(z)): 0.59 / 0.14 | Acc 73.4 / 51.8\n[0/2][414/782] Loss_D: 1.84 Loss_G: 14.04 D(x): 0.74 D(G(z)): 0.28 / 0.23 | Acc 53.1 / 51.8\n[0/2][415/782] Loss_D: 1.94 Loss_G: 14.23 D(x): 0.83 D(G(z)): 0.42 / 0.18 | Acc 62.5 / 51.9\n[0/2][416/782] Loss_D: 1.98 Loss_G: 14.26 D(x): 0.76 D(G(z)): 0.37 / 0.21 | Acc 64.1 / 51.9\n[0/2][417/782] Loss_D: 1.82 Loss_G: 14.26 D(x): 0.81 D(G(z)): 0.40 / 0.21 | Acc 65.6 / 51.9\n[0/2][418/782] Loss_D: 1.86 Loss_G: 14.26 D(x): 0.78 D(G(z)): 0.41 / 0.18 | Acc 70.3 / 52.0\nsemi Test: [0/157]\t Prec@1 54.688 (54.688)\nsemi Test: [50/157]\t Prec@1 43.750 (49.112)\nsemi Test: [100/157]\t Prec@1 35.938 (48.948)\nsemi Test: [150/157]\t Prec@1 46.875 (48.810)\nsemi Test Prec@1 48.61\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (47.365)\nsup Test: [100/157]\t Prec@1 45.312 (47.184)\nsup Test: [150/157]\t Prec@1 53.125 (46.999)\nsup Test Prec@1 46.84\n[0/2][419/782] Loss_D: 1.81 Loss_G: 13.95 D(x): 0.75 D(G(z)): 0.33 / 0.23 | Acc 73.4 / 52.0\n[0/2][420/782] Loss_D: 1.86 Loss_G: 14.12 D(x): 0.75 D(G(z)): 0.45 / 0.18 | Acc 75.0 / 52.1\n[0/2][421/782] Loss_D: 1.84 Loss_G: 14.18 D(x): 0.78 D(G(z)): 0.37 / 0.20 | Acc 60.9 / 52.1\n[0/2][422/782] Loss_D: 2.07 Loss_G: 13.50 D(x): 0.70 D(G(z)): 0.42 / 0.31 | Acc 62.5 / 52.1\n[0/2][423/782] Loss_D: 2.03 Loss_G: 14.67 D(x): 0.86 D(G(z)): 0.52 / 0.10 | Acc 70.3 / 52.2\n[0/2][424/782] Loss_D: 1.88 Loss_G: 13.61 D(x): 0.70 D(G(z)): 0.27 / 0.31 | Acc 64.1 / 
52.2\n[0/2][425/782] Loss_D: 1.64 Loss_G: 14.89 D(x): 0.87 D(G(z)): 0.52 / 0.09 | Acc 84.4 / 52.3\n[0/2][426/782] Loss_D: 2.03 Loss_G: 13.12 D(x): 0.64 D(G(z)): 0.23 / 0.48 | Acc 48.4 / 52.3\n[0/2][427/782] Loss_D: 2.13 Loss_G: 15.05 D(x): 0.92 D(G(z)): 0.66 / 0.06 | Acc 71.9 / 52.3\n[0/2][428/782] Loss_D: 1.93 Loss_G: 13.15 D(x): 0.60 D(G(z)): 0.17 / 0.34 | Acc 56.2 / 52.3\nsemi Test: [0/157]\t Prec@1 54.688 (54.688)\nsemi Test: [50/157]\t Prec@1 45.312 (47.794)\nsemi Test: [100/157]\t Prec@1 35.938 (47.509)\nsemi Test: [150/157]\t Prec@1 45.312 (47.651)\nsemi Test Prec@1 47.45\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 53.125 (47.426)\nsup Test: [100/157]\t Prec@1 39.062 (47.710)\nsup Test: [150/157]\t Prec@1 50.000 (47.341)\nsup Test Prec@1 47.25\n[0/2][429/782] Loss_D: 2.07 Loss_G: 14.66 D(x): 0.92 D(G(z)): 0.56 / 0.11 | Acc 60.9 / 52.3\n[0/2][430/782] Loss_D: 2.05 Loss_G: 13.43 D(x): 0.65 D(G(z)): 0.25 / 0.40 | Acc 59.4 / 52.3\n[0/2][431/782] Loss_D: 2.19 Loss_G: 14.90 D(x): 0.91 D(G(z)): 0.60 / 0.09 | Acc 70.3 / 52.4\n[0/2][432/782] Loss_D: 1.87 Loss_G: 13.63 D(x): 0.73 D(G(z)): 0.21 / 0.26 | Acc 53.1 / 52.4\n[0/2][433/782] Loss_D: 1.79 Loss_G: 13.89 D(x): 0.82 D(G(z)): 0.41 / 0.21 | Acc 65.6 / 52.4\n[0/2][434/782] Loss_D: 1.68 Loss_G: 14.31 D(x): 0.84 D(G(z)): 0.39 / 0.14 | Acc 64.1 / 52.4\n[0/2][435/782] Loss_D: 1.80 Loss_G: 13.81 D(x): 0.79 D(G(z)): 0.29 / 0.20 | Acc 64.1 / 52.5\n[0/2][436/782] Loss_D: 1.86 Loss_G: 14.01 D(x): 0.77 D(G(z)): 0.39 / 0.19 | Acc 71.9 / 52.5\n[0/2][437/782] Loss_D: 1.66 Loss_G: 14.26 D(x): 0.77 D(G(z)): 0.36 / 0.17 | Acc 81.2 / 52.6\n[0/2][438/782] Loss_D: 2.06 Loss_G: 13.30 D(x): 0.78 D(G(z)): 0.36 / 0.31 | Acc 53.1 / 52.6\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 51.562 (47.365)\nsemi Test: [100/157]\t Prec@1 42.188 (47.246)\nsemi Test: [150/157]\t Prec@1 40.625 (47.868)\nsemi Test Prec@1 47.76\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 48.438 (47.917)\nsup Test: [100/157]\t Prec@1 42.188 (47.912)\nsup Test: [150/157]\t Prec@1 50.000 (47.351)\nsup Test Prec@1 47.26\n[0/2][439/782] Loss_D: 2.01 Loss_G: 14.05 D(x): 0.76 D(G(z)): 0.49 / 0.20 | Acc 73.4 / 52.6\n[0/2][440/782] Loss_D: 1.61 Loss_G: 14.39 D(x): 0.83 D(G(z)): 0.36 / 0.12 | Acc 68.8 / 52.7\n[0/2][441/782] Loss_D: 1.83 Loss_G: 12.77 D(x): 0.71 D(G(z)): 0.25 / 0.46 | Acc 59.4 / 52.7\n[0/2][442/782] Loss_D: 2.03 Loss_G: 15.03 D(x): 0.92 D(G(z)): 0.64 / 0.06 | Acc 76.6 / 52.7\n[0/2][443/782] Loss_D: 2.60 Loss_G: 12.48 D(x): 0.60 D(G(z)): 0.14 / 0.71 | Acc 29.7 / 52.7\n[0/2][444/782] Loss_D: 2.82 Loss_G: 14.50 D(x): 0.95 D(G(z)): 0.81 / 0.11 | Acc 78.1 / 52.7\n[0/2][445/782] Loss_D: 1.86 Loss_G: 13.59 D(x): 0.73 D(G(z)): 0.21 / 0.23 | Acc 57.8 / 52.8\n[0/2][446/782] Loss_D: 1.70 Loss_G: 13.36 D(x): 0.83 D(G(z)): 0.39 / 0.26 | Acc 73.4 / 52.8\n[0/2][447/782] Loss_D: 1.82 Loss_G: 14.07 D(x): 0.84 D(G(z)): 0.43 / 0.17 | Acc 65.6 / 52.8\n[0/2][448/782] Loss_D: 1.77 Loss_G: 13.34 D(x): 0.73 D(G(z)): 0.30 / 0.30 | Acc 70.3 / 52.9\nsemi Test: [0/157]\t Prec@1 53.125 (53.125)\nsemi Test: [50/157]\t Prec@1 45.312 (48.131)\nsemi Test: [100/157]\t Prec@1 43.750 (48.205)\nsemi Test: [150/157]\t Prec@1 50.000 (48.541)\nsemi Test Prec@1 48.42\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 50.000 (47.518)\nsup Test: [100/157]\t Prec@1 45.312 (48.004)\nsup Test: [150/157]\t Prec@1 53.125 (47.599)\nsup Test Prec@1 47.60\n[0/2][449/782] Loss_D: 1.65 Loss_G: 14.52 D(x): 0.90 D(G(z)): 0.48 / 0.11 | 
Acc 79.7 / 52.9\n[0/2][450/782] Loss_D: 1.85 Loss_G: 13.07 D(x): 0.69 D(G(z)): 0.23 / 0.37 | Acc 60.9 / 52.9\n[0/2][451/782] Loss_D: 1.87 Loss_G: 14.07 D(x): 0.89 D(G(z)): 0.54 / 0.15 | Acc 76.6 / 53.0\n[0/2][452/782] Loss_D: 1.72 Loss_G: 13.16 D(x): 0.74 D(G(z)): 0.27 / 0.31 | Acc 57.8 / 53.0\n[0/2][453/782] Loss_D: 1.68 Loss_G: 13.87 D(x): 0.86 D(G(z)): 0.47 / 0.17 | Acc 76.6 / 53.1\n[0/2][454/782] Loss_D: 1.85 Loss_G: 13.49 D(x): 0.77 D(G(z)): 0.32 / 0.25 | Acc 64.1 / 53.1\n[0/2][455/782] Loss_D: 1.62 Loss_G: 13.81 D(x): 0.83 D(G(z)): 0.47 / 0.16 | Acc 79.7 / 53.1\n[0/2][456/782] Loss_D: 1.84 Loss_G: 12.84 D(x): 0.66 D(G(z)): 0.31 / 0.38 | Acc 64.1 / 53.2\n[0/2][457/782] Loss_D: 1.72 Loss_G: 14.34 D(x): 0.87 D(G(z)): 0.54 / 0.11 | Acc 85.9 / 53.2\n[0/2][458/782] Loss_D: 1.99 Loss_G: 12.66 D(x): 0.66 D(G(z)): 0.20 / 0.40 | Acc 48.4 / 53.2\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 45.312 (49.020)\nsemi Test: [100/157]\t Prec@1 42.188 (48.963)\nsemi Test: [150/157]\t Prec@1 45.312 (48.976)\nsemi Test Prec@1 48.95\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 53.125 (47.518)\nsup Test: [100/157]\t Prec@1 45.312 (47.695)\nsup Test: [150/157]\t Prec@1 51.562 (47.216)\nsup Test Prec@1 47.16\n[0/2][459/782] Loss_D: 1.86 Loss_G: 14.26 D(x): 0.91 D(G(z)): 0.57 / 0.11 | Acc 82.8 / 53.3\n[0/2][460/782] Loss_D: 1.69 Loss_G: 13.29 D(x): 0.74 D(G(z)): 0.24 / 0.23 | Acc 67.2 / 53.3\n[0/2][461/782] Loss_D: 1.75 Loss_G: 13.76 D(x): 0.85 D(G(z)): 0.43 / 0.19 | Acc 73.4 / 53.4\n[0/2][462/782] Loss_D: 1.64 Loss_G: 13.91 D(x): 0.83 D(G(z)): 0.37 / 0.16 | Acc 70.3 / 53.4\n[0/2][463/782] Loss_D: 1.73 Loss_G: 12.95 D(x): 0.76 D(G(z)): 0.34 / 0.27 | Acc 71.9 / 53.4\n[0/2][464/782] Loss_D: 1.72 Loss_G: 13.99 D(x): 0.84 D(G(z)): 0.46 / 0.16 | Acc 75.0 / 53.5\n[0/2][465/782] Loss_D: 2.11 Loss_G: 12.63 D(x): 0.71 D(G(z)): 0.31 / 0.46 | Acc 51.6 / 53.5\n[0/2][466/782] Loss_D: 2.04 Loss_G: 14.69 D(x): 0.90 D(G(z)): 0.65 / 0.06 | Acc 78.1 / 53.5\n[0/2][467/782] Loss_D: 2.09 Loss_G: 12.69 D(x): 0.58 D(G(z)): 0.13 / 0.46 | Acc 50.0 / 53.5\n[0/2][468/782] Loss_D: 1.91 Loss_G: 14.57 D(x): 0.95 D(G(z)): 0.64 / 0.08 | Acc 81.2 / 53.6\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 45.312 (47.580)\nsemi Test: [100/157]\t Prec@1 45.312 (48.422)\nsemi Test: [150/157]\t Prec@1 48.438 (48.179)\nsemi Test Prec@1 47.94\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 48.438 (47.028)\nsup Test: [100/157]\t Prec@1 48.438 (47.602)\nsup Test: [150/157]\t Prec@1 51.562 (47.289)\nsup Test Prec@1 47.11\n[0/2][469/782] Loss_D: 1.69 Loss_G: 13.23 D(x): 0.71 D(G(z)): 0.17 / 0.25 | Acc 62.5 / 53.6\n[0/2][470/782] Loss_D: 1.69 Loss_G: 13.83 D(x): 0.86 D(G(z)): 0.44 / 0.16 | Acc 76.6 / 53.7\n[0/2][471/782] Loss_D: 1.59 Loss_G: 13.66 D(x): 0.79 D(G(z)): 0.31 / 0.16 | Acc 78.1 / 53.7\n[0/2][472/782] Loss_D: 1.54 Loss_G: 13.36 D(x): 0.79 D(G(z)): 0.34 / 0.22 | Acc 73.4 / 53.8\n[0/2][473/782] Loss_D: 1.74 Loss_G: 13.66 D(x): 0.79 D(G(z)): 0.44 / 0.15 | Acc 75.0 / 53.8\n[0/2][474/782] Loss_D: 1.87 Loss_G: 12.71 D(x): 0.71 D(G(z)): 0.31 / 0.40 | Acc 57.8 / 53.8\n[0/2][475/782] Loss_D: 2.01 Loss_G: 14.26 D(x): 0.88 D(G(z)): 0.59 / 0.09 | Acc 71.9 / 53.8\n[0/2][476/782] Loss_D: 1.79 Loss_G: 12.76 D(x): 0.62 D(G(z)): 0.19 / 0.35 | Acc 67.2 / 53.9\n[0/2][477/782] Loss_D: 2.00 Loss_G: 13.43 D(x): 0.88 D(G(z)): 0.50 / 0.21 | Acc 65.6 / 53.9\n[0/2][478/782] Loss_D: 1.97 Loss_G: 13.39 D(x): 0.76 D(G(z)): 0.36 / 0.21 | Acc 54.7 / 53.9\nsemi Test: 
[0/157]\t Prec@1 51.562 (51.562)\nsemi Test: [50/157]\t Prec@1 53.125 (48.866)\nsemi Test: [100/157]\t Prec@1 45.312 (48.933)\nsemi Test: [150/157]\t Prec@1 48.438 (48.965)\nsemi Test Prec@1 48.79\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (47.733)\nsup Test: [100/157]\t Prec@1 39.062 (47.819)\nsup Test: [150/157]\t Prec@1 48.438 (47.289)\nsup Test Prec@1 47.19\n[0/2][479/782] Loss_D: 1.59 Loss_G: 13.28 D(x): 0.78 D(G(z)): 0.38 / 0.20 | Acc 79.7 / 54.0\n[0/2][480/782] Loss_D: 1.79 Loss_G: 13.20 D(x): 0.78 D(G(z)): 0.35 / 0.25 | Acc 64.1 / 54.0\n[0/2][481/782] Loss_D: 1.80 Loss_G: 14.07 D(x): 0.85 D(G(z)): 0.45 / 0.13 | Acc 68.8 / 54.0\n[0/2][482/782] Loss_D: 2.04 Loss_G: 12.71 D(x): 0.70 D(G(z)): 0.27 / 0.39 | Acc 51.6 / 54.0\n[0/2][483/782] Loss_D: 1.82 Loss_G: 14.06 D(x): 0.88 D(G(z)): 0.56 / 0.12 | Acc 78.1 / 54.0\n[0/2][484/782] Loss_D: 1.83 Loss_G: 12.75 D(x): 0.68 D(G(z)): 0.26 / 0.33 | Acc 54.7 / 54.0\n[0/2][485/782] Loss_D: 1.86 Loss_G: 13.87 D(x): 0.85 D(G(z)): 0.53 / 0.14 | Acc 81.2 / 54.1\n[0/2][486/782] Loss_D: 1.72 Loss_G: 13.16 D(x): 0.74 D(G(z)): 0.28 / 0.25 | Acc 60.9 / 54.1\n[0/2][487/782] Loss_D: 1.63 Loss_G: 14.02 D(x): 0.85 D(G(z)): 0.44 / 0.14 | Acc 78.1 / 54.2\n[0/2][488/782] Loss_D: 1.53 Loss_G: 13.08 D(x): 0.78 D(G(z)): 0.28 / 0.23 | Acc 71.9 / 54.2\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 43.750 (46.538)\nsemi Test: [100/157]\t Prec@1 40.625 (46.999)\nsemi Test: [150/157]\t Prec@1 42.188 (47.134)\nsemi Test Prec@1 46.93\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 51.562 (47.151)\nsup Test: [100/157]\t Prec@1 39.062 (47.092)\nsup Test: [150/157]\t Prec@1 45.312 (46.834)\nsup Test Prec@1 46.75\n[0/2][489/782] Loss_D: 1.99 Loss_G: 13.57 D(x): 0.79 D(G(z)): 0.43 / 0.19 | Acc 64.1 / 54.2\n[0/2][490/782] Loss_D: 2.07 Loss_G: 13.33 D(x): 0.76 D(G(z)): 0.36 / 0.24 | Acc 59.4 / 54.2\n[0/2][491/782] Loss_D: 1.71 Loss_G: 14.15 D(x): 0.84 D(G(z)): 0.43 / 0.11 | Acc 70.3 / 54.3\n[0/2][492/782] Loss_D: 1.65 Loss_G: 12.79 D(x): 0.72 D(G(z)): 0.23 / 0.37 | Acc 57.8 / 54.3\n[0/2][493/782] Loss_D: 1.93 Loss_G: 14.56 D(x): 0.91 D(G(z)): 0.56 / 0.06 | Acc 70.3 / 54.3\n[0/2][494/782] Loss_D: 2.21 Loss_G: 11.95 D(x): 0.66 D(G(z)): 0.15 / 0.71 | Acc 35.9 / 54.3\n[0/2][495/782] Loss_D: 2.95 Loss_G: 14.65 D(x): 0.97 D(G(z)): 0.81 / 0.06 | Acc 70.3 / 54.3\n[0/2][496/782] Loss_D: 1.80 Loss_G: 13.22 D(x): 0.65 D(G(z)): 0.15 / 0.26 | Acc 54.7 / 54.3\n[0/2][497/782] Loss_D: 1.82 Loss_G: 12.90 D(x): 0.87 D(G(z)): 0.44 / 0.25 | Acc 71.9 / 54.3\n[0/2][498/782] Loss_D: 1.67 Loss_G: 13.60 D(x): 0.87 D(G(z)): 0.43 / 0.13 | Acc 82.8 / 54.4\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 48.438 (47.089)\nsemi Test: [100/157]\t Prec@1 43.750 (47.293)\nsemi Test: [150/157]\t Prec@1 46.875 (47.672)\nsemi Test Prec@1 47.48\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 48.438 (47.212)\nsup Test: [100/157]\t Prec@1 42.188 (47.447)\nsup Test: [150/157]\t Prec@1 42.188 (47.061)\nsup Test Prec@1 46.84\n[0/2][499/782] Loss_D: 1.84 Loss_G: 12.40 D(x): 0.67 D(G(z)): 0.28 / 0.41 | Acc 60.9 / 54.4\n[0/2][500/782] Loss_D: 2.11 Loss_G: 13.99 D(x): 0.91 D(G(z)): 0.60 / 0.10 | Acc 65.6 / 54.4\n[0/2][501/782] Loss_D: 2.24 Loss_G: 12.54 D(x): 0.61 D(G(z)): 0.20 / 0.39 | Acc 46.9 / 54.4\n[0/2][502/782] Loss_D: 1.91 Loss_G: 13.47 D(x): 0.89 D(G(z)): 0.54 / 0.13 | Acc 76.6 / 54.5\n[0/2][503/782] Loss_D: 1.59 Loss_G: 13.17 D(x): 0.80 D(G(z)): 0.23 / 0.19 | Acc 59.4 / 
54.5\n[0/2][504/782] Loss_D: 1.68 Loss_G: 13.12 D(x): 0.83 D(G(z)): 0.33 / 0.18 | Acc 73.4 / 54.5\n[0/2][505/782] Loss_D: 1.56 Loss_G: 13.42 D(x): 0.82 D(G(z)): 0.31 / 0.18 | Acc 67.2 / 54.5\n[0/2][506/782] Loss_D: 1.39 Loss_G: 13.14 D(x): 0.84 D(G(z)): 0.32 / 0.20 | Acc 67.2 / 54.6\n[0/2][507/782] Loss_D: 1.65 Loss_G: 13.16 D(x): 0.87 D(G(z)): 0.38 / 0.16 | Acc 64.1 / 54.6\n[0/2][508/782] Loss_D: 1.72 Loss_G: 12.62 D(x): 0.72 D(G(z)): 0.31 / 0.29 | Acc 70.3 / 54.6\nsemi Test: [0/157]\t Prec@1 40.625 (40.625)\nsemi Test: [50/157]\t Prec@1 50.000 (48.407)\nsemi Test: [100/157]\t Prec@1 42.188 (48.639)\nsemi Test: [150/157]\t Prec@1 46.875 (48.913)\nsemi Test Prec@1 48.71\nsup Test: [0/157]\t Prec@1 46.875 (46.875)\nsup Test: [50/157]\t Prec@1 45.312 (47.580)\nsup Test: [100/157]\t Prec@1 43.750 (47.587)\nsup Test: [150/157]\t Prec@1 46.875 (46.958)\nsup Test Prec@1 46.85\n[0/2][509/782] Loss_D: 1.62 Loss_G: 13.62 D(x): 0.85 D(G(z)): 0.47 / 0.12 | Acc 79.7 / 54.7\n[0/2][510/782] Loss_D: 1.85 Loss_G: 12.36 D(x): 0.71 D(G(z)): 0.24 / 0.41 | Acc 62.5 / 54.7\n[0/2][511/782] Loss_D: 1.55 Loss_G: 13.92 D(x): 0.92 D(G(z)): 0.54 / 0.09 | Acc 84.4 / 54.7\n[0/2][512/782] Loss_D: 1.84 Loss_G: 12.38 D(x): 0.72 D(G(z)): 0.19 / 0.41 | Acc 51.6 / 54.7\n[0/2][513/782] Loss_D: 1.90 Loss_G: 13.75 D(x): 0.90 D(G(z)): 0.60 / 0.12 | Acc 79.7 / 54.8\n[0/2][514/782] Loss_D: 1.51 Loss_G: 12.88 D(x): 0.74 D(G(z)): 0.22 / 0.23 | Acc 79.7 / 54.8\n[0/2][515/782] Loss_D: 1.70 Loss_G: 12.64 D(x): 0.81 D(G(z)): 0.41 / 0.25 | Acc 71.9 / 54.9\n[0/2][516/782] Loss_D: 1.42 Loss_G: 13.31 D(x): 0.81 D(G(z)): 0.40 / 0.16 | Acc 87.5 / 54.9\n[0/2][517/782] Loss_D: 1.58 Loss_G: 12.22 D(x): 0.70 D(G(z)): 0.29 / 0.36 | Acc 76.6 / 55.0\n[0/2][518/782] Loss_D: 1.76 Loss_G: 13.63 D(x): 0.87 D(G(z)): 0.50 / 0.11 | Acc 73.4 / 55.0\nsemi Test: [0/157]\t Prec@1 37.500 (37.500)\nsemi Test: [50/157]\t Prec@1 46.875 (46.569)\nsemi Test: [100/157]\t Prec@1 45.312 (46.782)\nsemi Test: [150/157]\t Prec@1 42.188 (46.606)\nsemi Test Prec@1 46.44\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 50.000 (47.825)\nsup Test: [100/157]\t Prec@1 48.438 (48.082)\nsup Test: [150/157]\t Prec@1 50.000 (47.672)\nsup Test Prec@1 47.60\n[0/2][519/782] Loss_D: 1.85 Loss_G: 12.09 D(x): 0.63 D(G(z)): 0.23 / 0.45 | Acc 59.4 / 55.0\n[0/2][520/782] Loss_D: 2.07 Loss_G: 13.94 D(x): 0.94 D(G(z)): 0.60 / 0.08 | Acc 75.0 / 55.0\n[0/2][521/782] Loss_D: 1.72 Loss_G: 12.28 D(x): 0.58 D(G(z)): 0.18 / 0.43 | Acc 70.3 / 55.1\n[0/2][522/782] Loss_D: 1.82 Loss_G: 13.75 D(x): 0.92 D(G(z)): 0.62 / 0.11 | Acc 84.4 / 55.1\n[0/2][523/782] Loss_D: 1.77 Loss_G: 12.09 D(x): 0.66 D(G(z)): 0.24 / 0.41 | Acc 67.2 / 55.1\n[0/2][524/782] Loss_D: 2.06 Loss_G: 13.48 D(x): 0.87 D(G(z)): 0.61 / 0.13 | Acc 70.3 / 55.2\n[0/2][525/782] Loss_D: 1.86 Loss_G: 12.59 D(x): 0.67 D(G(z)): 0.24 / 0.34 | Acc 60.9 / 55.2\n[0/2][526/782] Loss_D: 1.83 Loss_G: 13.17 D(x): 0.86 D(G(z)): 0.52 / 0.16 | Acc 75.0 / 55.2\n[0/2][527/782] Loss_D: 2.00 Loss_G: 12.60 D(x): 0.73 D(G(z)): 0.28 / 0.29 | Acc 56.2 / 55.2\n[0/2][528/782] Loss_D: 1.61 Loss_G: 13.63 D(x): 0.89 D(G(z)): 0.48 / 0.12 | Acc 78.1 / 55.3\nsemi Test: [0/157]\t Prec@1 42.188 (42.188)\nsemi Test: [50/157]\t Prec@1 42.188 (49.203)\nsemi Test: [100/157]\t Prec@1 42.188 (49.737)\nsemi Test: [150/157]\t Prec@1 48.438 (49.545)\nsemi Test Prec@1 49.40\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 51.562 (48.039)\nsup Test: [100/157]\t Prec@1 46.875 (48.453)\nsup Test: [150/157]\t Prec@1 48.438 (47.930)\nsup 
Test Prec@1 47.83\n[0/2][529/782] Loss_D: 1.59 Loss_G: 12.55 D(x): 0.71 D(G(z)): 0.24 / 0.30 | Acc 68.8 / 55.3\n[0/2][530/782] Loss_D: 1.51 Loss_G: 13.22 D(x): 0.86 D(G(z)): 0.47 / 0.14 | Acc 87.5 / 55.4\n[0/2][531/782] Loss_D: 1.89 Loss_G: 12.29 D(x): 0.77 D(G(z)): 0.29 / 0.32 | Acc 51.6 / 55.4\n[0/2][532/782] Loss_D: 1.78 Loss_G: 13.39 D(x): 0.86 D(G(z)): 0.50 / 0.11 | Acc 81.2 / 55.4\n[0/2][533/782] Loss_D: 1.71 Loss_G: 11.95 D(x): 0.66 D(G(z)): 0.24 / 0.41 | Acc 67.2 / 55.4\n[0/2][534/782] Loss_D: 1.71 Loss_G: 14.12 D(x): 0.91 D(G(z)): 0.62 / 0.06 | Acc 92.2 / 55.5\n[0/2][535/782] Loss_D: 2.26 Loss_G: 11.45 D(x): 0.57 D(G(z)): 0.14 / 0.58 | Acc 39.1 / 55.5\n[0/2][536/782] Loss_D: 2.34 Loss_G: 13.67 D(x): 0.94 D(G(z)): 0.72 / 0.09 | Acc 79.7 / 55.5\n[0/2][537/782] Loss_D: 1.62 Loss_G: 12.65 D(x): 0.70 D(G(z)): 0.21 / 0.24 | Acc 71.9 / 55.5\n[0/2][538/782] Loss_D: 1.62 Loss_G: 12.95 D(x): 0.86 D(G(z)): 0.43 / 0.16 | Acc 75.0 / 55.6\nsemi Test: [0/157]\t Prec@1 42.188 (42.188)\nsemi Test: [50/157]\t Prec@1 48.438 (48.499)\nsemi Test: [100/157]\t Prec@1 40.625 (48.979)\nsemi Test: [150/157]\t Prec@1 46.875 (49.172)\nsemi Test Prec@1 48.99\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 50.000 (48.162)\nsup Test: [100/157]\t Prec@1 45.312 (48.453)\nsup Test: [150/157]\t Prec@1 48.438 (48.055)\nsup Test Prec@1 47.90\n[0/2][539/782] Loss_D: 1.58 Loss_G: 12.90 D(x): 0.75 D(G(z)): 0.36 / 0.20 | Acc 78.1 / 55.6\n[0/2][540/782] Loss_D: 1.69 Loss_G: 13.04 D(x): 0.83 D(G(z)): 0.39 / 0.17 | Acc 76.6 / 55.7\n[0/2][541/782] Loss_D: 1.74 Loss_G: 12.67 D(x): 0.77 D(G(z)): 0.36 / 0.23 | Acc 68.8 / 55.7\n[0/2][542/782] Loss_D: 1.71 Loss_G: 12.69 D(x): 0.79 D(G(z)): 0.42 / 0.19 | Acc 76.6 / 55.7\n[0/2][543/782] Loss_D: 1.52 Loss_G: 12.80 D(x): 0.79 D(G(z)): 0.35 / 0.16 | Acc 79.7 / 55.8\n[0/2][544/782] Loss_D: 1.49 Loss_G: 12.38 D(x): 0.78 D(G(z)): 0.29 / 0.25 | Acc 75.0 / 55.8\n[0/2][545/782] Loss_D: 1.56 Loss_G: 13.58 D(x): 0.87 D(G(z)): 0.44 / 0.09 | Acc 79.7 / 55.8\n[0/2][546/782] Loss_D: 1.59 Loss_G: 12.26 D(x): 0.70 D(G(z)): 0.22 / 0.38 | Acc 68.8 / 55.9\n[0/2][547/782] Loss_D: 1.88 Loss_G: 13.63 D(x): 0.87 D(G(z)): 0.57 / 0.10 | Acc 78.1 / 55.9\n[0/2][548/782] Loss_D: 1.78 Loss_G: 12.22 D(x): 0.70 D(G(z)): 0.27 / 0.30 | Acc 56.2 / 55.9\nsemi Test: [0/157]\t Prec@1 39.062 (39.062)\nsemi Test: [50/157]\t Prec@1 53.125 (48.223)\nsemi Test: [100/157]\t Prec@1 40.625 (48.236)\nsemi Test: [150/157]\t Prec@1 37.500 (48.200)\nsemi Test Prec@1 48.12\nsup Test: [0/157]\t Prec@1 48.438 (48.438)\nsup Test: [50/157]\t Prec@1 50.000 (48.407)\nsup Test: [100/157]\t Prec@1 43.750 (48.530)\nsup Test: [150/157]\t Prec@1 46.875 (48.168)\nsup Test Prec@1 48.09\n[0/2][549/782] Loss_D: 1.94 Loss_G: 13.06 D(x): 0.79 D(G(z)): 0.49 / 0.15 | Acc 73.4 / 55.9\n[0/2][550/782] Loss_D: 1.66 Loss_G: 12.54 D(x): 0.76 D(G(z)): 0.29 / 0.23 | Acc 73.4 / 56.0\n[0/2][551/782] Loss_D: 1.50 Loss_G: 13.03 D(x): 0.81 D(G(z)): 0.44 / 0.13 | Acc 87.5 / 56.0\n[0/2][552/782] Loss_D: 1.75 Loss_G: 12.01 D(x): 0.74 D(G(z)): 0.25 / 0.38 | Acc 53.1 / 56.0\n[0/2][553/782] Loss_D: 1.61 Loss_G: 14.17 D(x): 0.93 D(G(z)): 0.59 / 0.05 | Acc 84.4 / 56.1\n[0/2][554/782] Loss_D: 1.63 Loss_G: 11.61 D(x): 0.67 D(G(z)): 0.11 / 0.37 | Acc 62.5 / 56.1\n[0/2][555/782] Loss_D: 1.42 Loss_G: 13.47 D(x): 0.93 D(G(z)): 0.53 / 0.09 | Acc 92.2 / 56.1\n[0/2][556/782] Loss_D: 1.76 Loss_G: 12.04 D(x): 0.72 D(G(z)): 0.22 / 0.36 | Acc 57.8 / 56.1\n[0/2][557/782] Loss_D: 1.82 Loss_G: 13.32 D(x): 0.92 D(G(z)): 0.49 / 0.10 | Acc 78.1 / 
56.2\n[0/2][558/782] Loss_D: 1.57 Loss_G: 12.26 D(x): 0.72 D(G(z)): 0.26 / 0.26 | Acc 65.6 / 56.2\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 45.312 (48.499)\nsemi Test: [100/157]\t Prec@1 42.188 (48.623)\nsemi Test: [150/157]\t Prec@1 46.875 (48.562)\nsemi Test Prec@1 48.39\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 46.875 (47.273)\nsup Test: [100/157]\t Prec@1 45.312 (47.695)\nsup Test: [150/157]\t Prec@1 45.312 (47.423)\nsup Test Prec@1 47.35\n[0/2][559/782] Loss_D: 1.67 Loss_G: 12.90 D(x): 0.87 D(G(z)): 0.42 / 0.15 | Acc 68.8 / 56.2\n[0/2][560/782] Loss_D: 1.62 Loss_G: 12.61 D(x): 0.77 D(G(z)): 0.31 / 0.20 | Acc 73.4 / 56.3\n[0/2][561/782] Loss_D: 1.63 Loss_G: 12.05 D(x): 0.80 D(G(z)): 0.32 / 0.28 | Acc 75.0 / 56.3\n[0/2][562/782] Loss_D: 1.49 Loss_G: 12.97 D(x): 0.85 D(G(z)): 0.43 / 0.11 | Acc 85.9 / 56.3\n[0/2][563/782] Loss_D: 1.49 Loss_G: 11.54 D(x): 0.67 D(G(z)): 0.20 / 0.47 | Acc 78.1 / 56.4\n[0/2][564/782] Loss_D: 2.03 Loss_G: 14.08 D(x): 0.94 D(G(z)): 0.63 / 0.05 | Acc 78.1 / 56.4\n[0/2][565/782] Loss_D: 2.01 Loss_G: 11.32 D(x): 0.47 D(G(z)): 0.10 / 0.61 | Acc 64.1 / 56.4\n[0/2][566/782] Loss_D: 2.51 Loss_G: 13.63 D(x): 0.98 D(G(z)): 0.75 / 0.08 | Acc 71.9 / 56.5\n[0/2][567/782] Loss_D: 1.70 Loss_G: 12.23 D(x): 0.60 D(G(z)): 0.17 / 0.28 | Acc 64.1 / 56.5\n[0/2][568/782] Loss_D: 1.56 Loss_G: 12.66 D(x): 0.88 D(G(z)): 0.44 / 0.15 | Acc 75.0 / 56.5\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 53.125 (49.479)\nsemi Test: [100/157]\t Prec@1 48.438 (49.830)\nsemi Test: [150/157]\t Prec@1 51.562 (49.741)\nsemi Test Prec@1 49.58\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 50.000 (47.518)\nsup Test: [100/157]\t Prec@1 43.750 (47.649)\nsup Test: [150/157]\t Prec@1 48.438 (47.310)\nsup Test Prec@1 47.17\n[0/2][569/782] Loss_D: 1.80 Loss_G: 12.23 D(x): 0.76 D(G(z)): 0.32 / 0.23 | Acc 59.4 / 56.5\n[0/2][570/782] Loss_D: 1.74 Loss_G: 12.13 D(x): 0.78 D(G(z)): 0.41 / 0.26 | Acc 75.0 / 56.5\n[0/2][571/782] Loss_D: 1.70 Loss_G: 12.57 D(x): 0.79 D(G(z)): 0.43 / 0.17 | Acc 71.9 / 56.6\n[0/2][572/782] Loss_D: 1.50 Loss_G: 12.15 D(x): 0.75 D(G(z)): 0.31 / 0.27 | Acc 76.6 / 56.6\n[0/2][573/782] Loss_D: 1.49 Loss_G: 12.57 D(x): 0.81 D(G(z)): 0.44 / 0.16 | Acc 89.1 / 56.7\n[0/2][574/782] Loss_D: 1.70 Loss_G: 12.09 D(x): 0.73 D(G(z)): 0.30 / 0.29 | Acc 68.8 / 56.7\n[0/2][575/782] Loss_D: 1.89 Loss_G: 12.90 D(x): 0.84 D(G(z)): 0.49 / 0.15 | Acc 73.4 / 56.7\n[0/2][576/782] Loss_D: 1.66 Loss_G: 11.92 D(x): 0.71 D(G(z)): 0.28 / 0.30 | Acc 68.8 / 56.7\n[0/2][577/782] Loss_D: 1.57 Loss_G: 13.33 D(x): 0.87 D(G(z)): 0.48 / 0.09 | Acc 81.2 / 56.8\n[0/2][578/782] Loss_D: 1.67 Loss_G: 11.27 D(x): 0.60 D(G(z)): 0.19 / 0.49 | Acc 70.3 / 56.8\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 50.000 (48.866)\nsemi Test: [100/157]\t Prec@1 45.312 (48.855)\nsemi Test: [150/157]\t Prec@1 40.625 (49.193)\nsemi Test Prec@1 48.95\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 46.875 (47.886)\nsup Test: [100/157]\t Prec@1 45.312 (48.004)\nsup Test: [150/157]\t Prec@1 46.875 (47.496)\nsup Test Prec@1 47.41\n[0/2][579/782] Loss_D: 1.94 Loss_G: 13.63 D(x): 0.88 D(G(z)): 0.64 / 0.07 | Acc 79.7 / 56.8\n[0/2][580/782] Loss_D: 1.99 Loss_G: 11.31 D(x): 0.65 D(G(z)): 0.15 / 0.52 | Acc 35.9 / 56.8\n[0/2][581/782] Loss_D: 1.96 Loss_G: 13.41 D(x): 0.92 D(G(z)): 0.67 / 0.08 | Acc 85.9 / 56.9\n[0/2][582/782] Loss_D: 1.64 Loss_G: 12.07 D(x): 0.65 D(G(z)): 0.19 / 0.24 | 
Acc 67.2 / 56.9\n[0/2][583/782] Loss_D: 1.65 Loss_G: 12.31 D(x): 0.81 D(G(z)): 0.42 / 0.23 | Acc 76.6 / 56.9\n[0/2][584/782] Loss_D: 1.69 Loss_G: 12.63 D(x): 0.77 D(G(z)): 0.40 / 0.18 | Acc 76.6 / 56.9\n[0/2][585/782] Loss_D: 1.71 Loss_G: 11.97 D(x): 0.74 D(G(z)): 0.36 / 0.26 | Acc 73.4 / 57.0\n[0/2][586/782] Loss_D: 1.46 Loss_G: 12.63 D(x): 0.82 D(G(z)): 0.42 / 0.16 | Acc 87.5 / 57.0\n[0/2][587/782] Loss_D: 1.58 Loss_G: 11.99 D(x): 0.70 D(G(z)): 0.32 / 0.29 | Acc 75.0 / 57.0\n[0/2][588/782] Loss_D: 1.69 Loss_G: 13.15 D(x): 0.82 D(G(z)): 0.49 / 0.09 | Acc 78.1 / 57.1\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 48.438 (49.295)\nsemi Test: [100/157]\t Prec@1 46.875 (49.613)\nsemi Test: [150/157]\t Prec@1 46.875 (49.803)\nsemi Test Prec@1 49.64\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 53.125 (48.009)\nsup Test: [100/157]\t Prec@1 40.625 (48.360)\nsup Test: [150/157]\t Prec@1 45.312 (47.837)\nsup Test Prec@1 47.71\n[0/2][589/782] Loss_D: 1.79 Loss_G: 11.28 D(x): 0.60 D(G(z)): 0.21 / 0.51 | Acc 67.2 / 57.1\n[0/2][590/782] Loss_D: 1.86 Loss_G: 13.61 D(x): 0.93 D(G(z)): 0.66 / 0.06 | Acc 90.6 / 57.2\n[0/2][591/782] Loss_D: 2.03 Loss_G: 10.96 D(x): 0.52 D(G(z)): 0.13 / 0.61 | Acc 57.8 / 57.2\n[0/2][592/782] Loss_D: 2.33 Loss_G: 13.10 D(x): 0.94 D(G(z)): 0.75 / 0.11 | Acc 85.9 / 57.2\n[0/2][593/782] Loss_D: 1.70 Loss_G: 11.93 D(x): 0.62 D(G(z)): 0.21 / 0.27 | Acc 67.2 / 57.2\n[0/2][594/782] Loss_D: 1.60 Loss_G: 11.86 D(x): 0.80 D(G(z)): 0.41 / 0.32 | Acc 75.0 / 57.3\n[0/2][595/782] Loss_D: 1.55 Loss_G: 12.68 D(x): 0.81 D(G(z)): 0.47 / 0.13 | Acc 84.4 / 57.3\n[0/2][596/782] Loss_D: 1.61 Loss_G: 11.55 D(x): 0.70 D(G(z)): 0.25 / 0.33 | Acc 73.4 / 57.3\n[0/2][597/782] Loss_D: 1.47 Loss_G: 12.09 D(x): 0.85 D(G(z)): 0.47 / 0.20 | Acc 93.8 / 57.4\n[0/2][598/782] Loss_D: 1.75 Loss_G: 12.13 D(x): 0.78 D(G(z)): 0.35 / 0.22 | Acc 71.9 / 57.4\nsemi Test: [0/157]\t Prec@1 40.625 (40.625)\nsemi Test: [50/157]\t Prec@1 51.562 (50.306)\nsemi Test: [100/157]\t Prec@1 45.312 (50.511)\nsemi Test: [150/157]\t Prec@1 45.312 (50.517)\nsemi Test Prec@1 50.40\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 51.562 (47.426)\nsup Test: [100/157]\t Prec@1 40.625 (47.850)\nsup Test: [150/157]\t Prec@1 48.438 (47.165)\nsup Test Prec@1 47.02\n[0/2][599/782] Loss_D: 1.60 Loss_G: 11.58 D(x): 0.73 D(G(z)): 0.35 / 0.33 | Acc 79.7 / 57.5\n[0/2][600/782] Loss_D: 1.61 Loss_G: 12.56 D(x): 0.84 D(G(z)): 0.47 / 0.13 | Acc 84.4 / 57.5\n[0/2][601/782] Loss_D: 1.74 Loss_G: 10.98 D(x): 0.61 D(G(z)): 0.26 / 0.48 | Acc 73.4 / 57.5\n[0/2][602/782] Loss_D: 1.94 Loss_G: 13.11 D(x): 0.94 D(G(z)): 0.60 / 0.09 | Acc 78.1 / 57.6\n[0/2][603/782] Loss_D: 1.99 Loss_G: 11.33 D(x): 0.58 D(G(z)): 0.18 / 0.44 | Acc 57.8 / 57.6\n[0/2][604/782] Loss_D: 1.90 Loss_G: 12.35 D(x): 0.85 D(G(z)): 0.59 / 0.15 | Acc 79.7 / 57.6\n[0/2][605/782] Loss_D: 1.52 Loss_G: 11.57 D(x): 0.72 D(G(z)): 0.28 / 0.30 | Acc 70.3 / 57.6\n[0/2][606/782] Loss_D: 1.70 Loss_G: 12.10 D(x): 0.81 D(G(z)): 0.48 / 0.21 | Acc 76.6 / 57.6\n[0/2][607/782] Loss_D: 1.53 Loss_G: 11.63 D(x): 0.77 D(G(z)): 0.36 / 0.27 | Acc 75.0 / 57.7\n[0/2][608/782] Loss_D: 1.42 Loss_G: 12.23 D(x): 0.81 D(G(z)): 0.43 / 0.19 | Acc 87.5 / 57.7\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 46.875 (49.020)\nsemi Test: [100/157]\t Prec@1 43.750 (49.041)\nsemi Test: [150/157]\t Prec@1 51.562 (49.369)\nsemi Test Prec@1 49.22\nsup Test: [0/157]\t Prec@1 43.750 (43.750)\nsup Test: [50/157]\t Prec@1 48.438 
(47.978)\nsup Test: [100/157]\t Prec@1 40.625 (48.407)\nsup Test: [150/157]\t Prec@1 51.562 (47.868)\nsup Test Prec@1 47.74\n[0/2][609/782] Loss_D: 1.46 Loss_G: 11.77 D(x): 0.79 D(G(z)): 0.30 / 0.25 | Acc 68.8 / 57.7\n[0/2][610/782] Loss_D: 1.64 Loss_G: 12.17 D(x): 0.83 D(G(z)): 0.42 / 0.18 | Acc 78.1 / 57.8\n[0/2][611/782] Loss_D: 1.52 Loss_G: 11.72 D(x): 0.70 D(G(z)): 0.33 / 0.28 | Acc 78.1 / 57.8\n[0/2][612/782] Loss_D: 1.51 Loss_G: 12.25 D(x): 0.79 D(G(z)): 0.42 / 0.18 | Acc 82.8 / 57.8\n[0/2][613/782] Loss_D: 1.44 Loss_G: 12.17 D(x): 0.79 D(G(z)): 0.33 / 0.20 | Acc 79.7 / 57.9\n[0/2][614/782] Loss_D: 1.76 Loss_G: 11.41 D(x): 0.74 D(G(z)): 0.37 / 0.34 | Acc 68.8 / 57.9\n[0/2][615/782] Loss_D: 1.60 Loss_G: 12.90 D(x): 0.84 D(G(z)): 0.51 / 0.11 | Acc 84.4 / 57.9\n[0/2][616/782] Loss_D: 1.48 Loss_G: 12.07 D(x): 0.78 D(G(z)): 0.28 / 0.19 | Acc 70.3 / 58.0\n[0/2][617/782] Loss_D: 1.79 Loss_G: 11.46 D(x): 0.69 D(G(z)): 0.40 / 0.39 | Acc 76.6 / 58.0\n[0/2][618/782] Loss_D: 1.96 Loss_G: 12.94 D(x): 0.84 D(G(z)): 0.60 / 0.08 | Acc 79.7 / 58.0\nsemi Test: [0/157]\t Prec@1 54.688 (54.688)\nsemi Test: [50/157]\t Prec@1 39.062 (48.928)\nsemi Test: [100/157]\t Prec@1 42.188 (49.459)\nsemi Test: [150/157]\t Prec@1 48.438 (49.762)\nsemi Test Prec@1 49.67\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 50.000 (48.039)\nsup Test: [100/157]\t Prec@1 43.750 (48.391)\nsup Test: [150/157]\t Prec@1 50.000 (48.034)\nsup Test Prec@1 47.94\n[0/2][619/782] Loss_D: 2.17 Loss_G: 10.55 D(x): 0.55 D(G(z)): 0.20 / 0.71 | Acc 48.4 / 58.0\n[0/2][620/782] Loss_D: 2.54 Loss_G: 13.06 D(x): 0.96 D(G(z)): 0.80 / 0.08 | Acc 84.4 / 58.1\n[0/2][621/782] Loss_D: 1.74 Loss_G: 11.38 D(x): 0.59 D(G(z)): 0.18 / 0.37 | Acc 65.6 / 58.1\n[0/2][622/782] Loss_D: 1.81 Loss_G: 12.19 D(x): 0.87 D(G(z)): 0.58 / 0.18 | Acc 85.9 / 58.1\n[0/2][623/782] Loss_D: 1.52 Loss_G: 11.97 D(x): 0.71 D(G(z)): 0.35 / 0.25 | Acc 79.7 / 58.1\n[0/2][624/782] Loss_D: 1.84 Loss_G: 11.82 D(x): 0.75 D(G(z)): 0.44 / 0.27 | Acc 79.7 / 58.2\n[0/2][625/782] Loss_D: 1.59 Loss_G: 12.29 D(x): 0.77 D(G(z)): 0.46 / 0.17 | Acc 84.4 / 58.2\n[0/2][626/782] Loss_D: 1.73 Loss_G: 11.25 D(x): 0.64 D(G(z)): 0.34 / 0.39 | Acc 68.8 / 58.2\n[0/2][627/782] Loss_D: 2.04 Loss_G: 12.23 D(x): 0.80 D(G(z)): 0.56 / 0.18 | Acc 75.0 / 58.3\n[0/2][628/782] Loss_D: 1.64 Loss_G: 11.96 D(x): 0.73 D(G(z)): 0.34 / 0.19 | Acc 76.6 / 58.3\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 40.625 (48.805)\nsemi Test: [100/157]\t Prec@1 43.750 (49.288)\nsemi Test: [150/157]\t Prec@1 45.312 (49.307)\nsemi Test Prec@1 49.19\nsup Test: [0/157]\t Prec@1 45.312 (45.312)\nsup Test: [50/157]\t Prec@1 50.000 (48.039)\nsup Test: [100/157]\t Prec@1 45.312 (48.391)\nsup Test: [150/157]\t Prec@1 48.438 (47.972)\nsup Test Prec@1 47.85\n[0/2][629/782] Loss_D: 1.72 Loss_G: 11.02 D(x): 0.67 D(G(z)): 0.37 / 0.40 | Acc 76.6 / 58.3\n[0/2][630/782] Loss_D: 1.80 Loss_G: 12.63 D(x): 0.82 D(G(z)): 0.59 / 0.10 | Acc 84.4 / 58.4\n[0/2][631/782] Loss_D: 2.23 Loss_G: 10.53 D(x): 0.43 D(G(z)): 0.20 / 0.72 | Acc 71.9 / 58.4\n[0/2][632/782] Loss_D: 2.68 Loss_G: 12.92 D(x): 0.94 D(G(z)): 0.82 / 0.08 | Acc 84.4 / 58.4\n[0/2][633/782] Loss_D: 1.97 Loss_G: 11.06 D(x): 0.54 D(G(z)): 0.18 / 0.44 | Acc 59.4 / 58.4\n[0/2][634/782] Loss_D: 1.94 Loss_G: 11.94 D(x): 0.93 D(G(z)): 0.61 / 0.17 | Acc 81.2 / 58.5\n[0/2][635/782] Loss_D: 1.47 Loss_G: 11.76 D(x): 0.75 D(G(z)): 0.30 / 0.26 | Acc 76.6 / 58.5\n[0/2][636/782] Loss_D: 1.65 Loss_G: 11.46 D(x): 0.74 D(G(z)): 0.45 / 0.27 | Acc 84.4 / 
58.5
[... GAN training log condensed for readability. The span covered here runs from iteration 637 of epoch 0 through iteration 411 of epoch 1 (2 epochs of 782 iterations each). Each record reports Loss_D (between about 1.2 and 2.6), Loss_G (drifting down from roughly 12.5 to about 8), D(x), two D(G(z)) readings, and the batch / running discriminator accuracy; the running accuracy ends epoch 0 near 62% and reaches close to 87% by iteration 411 of epoch 1. Every ten iterations the semi-supervised and supervised classifiers are evaluated over 157 test batches: semi Test Prec@1 climbs from about 49% to roughly 52-53%, while sup Test Prec@1 stays near 48-49%. A short parsing sketch follows, after which the raw log resumes. ...]
53.125 (53.125)\nsemi Test: [50/157]\t Prec@1 50.000 (51.072)\nsemi Test: [100/157]\t Prec@1 56.250 (50.820)\nsemi Test: [150/157]\t Prec@1 53.125 (50.869)\nsemi Test Prec@1 50.62\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 54.688 (48.775)\nsup Test: [100/157]\t Prec@1 45.312 (49.180)\nsup Test: [150/157]\t Prec@1 51.562 (48.872)\nsup Test Prec@1 48.75\n[1/2][387/782] Loss_D: 1.79 Loss_G: 7.90 D(x): 0.54 D(G(z)): 0.42 / 0.55 | Acc 89.1 / 86.4\n[1/2][388/782] Loss_D: 1.79 Loss_G: 9.28 D(x): 0.81 D(G(z)): 0.67 / 0.18 | Acc 96.9 / 86.5\n[1/2][389/782] Loss_D: 1.95 Loss_G: 7.66 D(x): 0.42 D(G(z)): 0.26 / 0.67 | Acc 82.8 / 86.5\n[1/2][390/782] Loss_D: 1.95 Loss_G: 9.12 D(x): 0.90 D(G(z)): 0.75 / 0.20 | Acc 98.4 / 86.5\n[1/2][391/782] Loss_D: 1.32 Loss_G: 8.81 D(x): 0.66 D(G(z)): 0.30 / 0.26 | Acc 85.9 / 86.5\n[1/2][392/782] Loss_D: 1.37 Loss_G: 8.29 D(x): 0.73 D(G(z)): 0.39 / 0.39 | Acc 93.8 / 86.5\n[1/2][393/782] Loss_D: 1.63 Loss_G: 8.37 D(x): 0.66 D(G(z)): 0.52 / 0.39 | Acc 95.3 / 86.5\n[1/2][394/782] Loss_D: 1.44 Loss_G: 8.85 D(x): 0.73 D(G(z)): 0.51 / 0.26 | Acc 96.9 / 86.6\n[1/2][395/782] Loss_D: 1.60 Loss_G: 8.01 D(x): 0.58 D(G(z)): 0.36 / 0.49 | Acc 84.4 / 86.6\n[1/2][396/782] Loss_D: 1.74 Loss_G: 8.42 D(x): 0.70 D(G(z)): 0.58 / 0.35 | Acc 93.8 / 86.6\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 43.750 (53.309)\nsemi Test: [100/157]\t Prec@1 57.812 (52.584)\nsemi Test: [150/157]\t Prec@1 53.125 (52.701)\nsemi Test Prec@1 52.52\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.468)\nsup Test: [100/157]\t Prec@1 46.875 (48.948)\nsup Test: [150/157]\t Prec@1 54.688 (48.655)\nsup Test Prec@1 48.56\n[1/2][397/782] Loss_D: 1.41 Loss_G: 8.60 D(x): 0.69 D(G(z)): 0.43 / 0.26 | Acc 92.2 / 86.6\n[1/2][398/782] Loss_D: 1.42 Loss_G: 8.07 D(x): 0.65 D(G(z)): 0.36 / 0.43 | Acc 89.1 / 86.6\n[1/2][399/782] Loss_D: 1.50 Loss_G: 8.53 D(x): 0.73 D(G(z)): 0.54 / 0.32 | Acc 96.9 / 86.6\n[1/2][400/782] Loss_D: 1.48 Loss_G: 8.49 D(x): 0.71 D(G(z)): 0.44 / 0.35 | Acc 89.1 / 86.6\n[1/2][401/782] Loss_D: 1.70 Loss_G: 8.20 D(x): 0.61 D(G(z)): 0.47 / 0.45 | Acc 92.2 / 86.6\n[1/2][402/782] Loss_D: 1.65 Loss_G: 8.34 D(x): 0.69 D(G(z)): 0.53 / 0.34 | Acc 96.9 / 86.7\n[1/2][403/782] Loss_D: 1.61 Loss_G: 8.53 D(x): 0.67 D(G(z)): 0.44 / 0.36 | Acc 84.4 / 86.7\n[1/2][404/782] Loss_D: 1.53 Loss_G: 8.32 D(x): 0.69 D(G(z)): 0.47 / 0.39 | Acc 93.8 / 86.7\n[1/2][405/782] Loss_D: 1.58 Loss_G: 8.55 D(x): 0.69 D(G(z)): 0.49 / 0.30 | Acc 92.2 / 86.7\n[1/2][406/782] Loss_D: 1.31 Loss_G: 8.67 D(x): 0.69 D(G(z)): 0.38 / 0.29 | Acc 92.2 / 86.7\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 54.688 (53.248)\nsemi Test: [100/157]\t Prec@1 45.312 (52.367)\nsemi Test: [150/157]\t Prec@1 50.000 (52.494)\nsemi Test Prec@1 52.28\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.591)\nsup Test: [100/157]\t Prec@1 45.312 (48.979)\nsup Test: [150/157]\t Prec@1 53.125 (48.665)\nsup Test Prec@1 48.59\n[1/2][407/782] Loss_D: 1.36 Loss_G: 8.47 D(x): 0.69 D(G(z)): 0.41 / 0.37 | Acc 95.3 / 86.7\n[1/2][408/782] Loss_D: 1.35 Loss_G: 8.61 D(x): 0.76 D(G(z)): 0.46 / 0.29 | Acc 93.8 / 86.7\n[1/2][409/782] Loss_D: 1.56 Loss_G: 8.20 D(x): 0.65 D(G(z)): 0.40 / 0.41 | Acc 85.9 / 86.7\n[1/2][410/782] Loss_D: 1.48 Loss_G: 8.73 D(x): 0.75 D(G(z)): 0.51 / 0.29 | Acc 92.2 / 86.8\n[1/2][411/782] Loss_D: 1.43 Loss_G: 8.17 D(x): 0.63 D(G(z)): 0.40 / 0.40 | Acc 98.4 / 86.8\n[1/2][412/782] Loss_D: 1.73 Loss_G: 8.03 
D(x): 0.60 D(G(z)): 0.51 / 0.47 | Acc 95.3 / 86.8\n[1/2][413/782] Loss_D: 1.45 Loss_G: 9.26 D(x): 0.77 D(G(z)): 0.56 / 0.15 | Acc 100.0 / 86.8\n[1/2][414/782] Loss_D: 1.76 Loss_G: 7.68 D(x): 0.51 D(G(z)): 0.24 / 0.67 | Acc 76.6 / 86.8\n[1/2][415/782] Loss_D: 1.78 Loss_G: 9.39 D(x): 0.91 D(G(z)): 0.72 / 0.17 | Acc 98.4 / 86.8\n[1/2][416/782] Loss_D: 1.60 Loss_G: 7.94 D(x): 0.51 D(G(z)): 0.25 / 0.53 | Acc 79.7 / 86.8\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 48.438 (52.451)\nsemi Test: [100/157]\t Prec@1 54.688 (52.367)\nsemi Test: [150/157]\t Prec@1 53.125 (52.866)\nsemi Test Prec@1 52.64\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.621)\nsup Test: [100/157]\t Prec@1 45.312 (48.917)\nsup Test: [150/157]\t Prec@1 51.562 (48.624)\nsup Test Prec@1 48.53\n[1/2][417/782] Loss_D: 1.63 Loss_G: 8.65 D(x): 0.84 D(G(z)): 0.60 / 0.27 | Acc 92.2 / 86.8\n[1/2][418/782] Loss_D: 1.26 Loss_G: 8.67 D(x): 0.72 D(G(z)): 0.37 / 0.27 | Acc 93.8 / 86.8\n[1/2][419/782] Loss_D: 1.52 Loss_G: 7.71 D(x): 0.61 D(G(z)): 0.37 / 0.54 | Acc 89.1 / 86.9\n[1/2][420/782] Loss_D: 1.78 Loss_G: 8.54 D(x): 0.75 D(G(z)): 0.62 / 0.31 | Acc 93.8 / 86.9\n[1/2][421/782] Loss_D: 1.64 Loss_G: 8.02 D(x): 0.59 D(G(z)): 0.40 / 0.44 | Acc 85.9 / 86.9\n[1/2][422/782] Loss_D: 1.46 Loss_G: 8.32 D(x): 0.75 D(G(z)): 0.51 / 0.32 | Acc 93.8 / 86.9\n[1/2][423/782] Loss_D: 1.59 Loss_G: 8.06 D(x): 0.59 D(G(z)): 0.41 / 0.44 | Acc 95.3 / 86.9\n[1/2][424/782] Loss_D: 1.52 Loss_G: 8.45 D(x): 0.76 D(G(z)): 0.54 / 0.29 | Acc 95.3 / 86.9\n[1/2][425/782] Loss_D: 1.63 Loss_G: 7.97 D(x): 0.56 D(G(z)): 0.38 / 0.46 | Acc 95.3 / 86.9\n[1/2][426/782] Loss_D: 1.29 Loss_G: 8.80 D(x): 0.83 D(G(z)): 0.53 / 0.23 | Acc 96.9 / 87.0\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 48.438 (52.328)\nsemi Test: [100/157]\t Prec@1 53.125 (52.475)\nsemi Test: [150/157]\t Prec@1 59.375 (53.135)\nsemi Test Prec@1 52.84\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.621)\nsup Test: [100/157]\t Prec@1 46.875 (49.056)\nsup Test: [150/157]\t Prec@1 53.125 (48.707)\nsup Test Prec@1 48.59\n[1/2][427/782] Loss_D: 1.66 Loss_G: 7.94 D(x): 0.50 D(G(z)): 0.31 / 0.52 | Acc 90.6 / 87.0\n[1/2][428/782] Loss_D: 1.60 Loss_G: 8.43 D(x): 0.80 D(G(z)): 0.61 / 0.31 | Acc 95.3 / 87.0\n[1/2][429/782] Loss_D: 1.58 Loss_G: 8.17 D(x): 0.62 D(G(z)): 0.40 / 0.40 | Acc 87.5 / 87.0\n[1/2][430/782] Loss_D: 1.51 Loss_G: 8.40 D(x): 0.72 D(G(z)): 0.52 / 0.32 | Acc 93.8 / 87.0\n[1/2][431/782] Loss_D: 1.48 Loss_G: 8.11 D(x): 0.71 D(G(z)): 0.41 / 0.39 | Acc 79.7 / 87.0\n[1/2][432/782] Loss_D: 1.59 Loss_G: 8.20 D(x): 0.66 D(G(z)): 0.50 / 0.37 | Acc 92.2 / 87.0\n[1/2][433/782] Loss_D: 1.54 Loss_G: 8.47 D(x): 0.68 D(G(z)): 0.47 / 0.33 | Acc 95.3 / 87.0\n[1/2][434/782] Loss_D: 1.46 Loss_G: 8.12 D(x): 0.68 D(G(z)): 0.41 / 0.41 | Acc 93.8 / 87.0\n[1/2][435/782] Loss_D: 1.48 Loss_G: 8.72 D(x): 0.76 D(G(z)): 0.50 / 0.26 | Acc 92.2 / 87.1\n[1/2][436/782] Loss_D: 1.43 Loss_G: 8.26 D(x): 0.59 D(G(z)): 0.37 / 0.40 | Acc 96.9 / 87.1\nsemi Test: [0/157]\t Prec@1 39.062 (39.062)\nsemi Test: [50/157]\t Prec@1 46.875 (52.512)\nsemi Test: [100/157]\t Prec@1 53.125 (51.918)\nsemi Test: [150/157]\t Prec@1 50.000 (52.494)\nsemi Test Prec@1 52.22\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (48.529)\nsup Test: [100/157]\t Prec@1 45.312 (48.793)\nsup Test: [150/157]\t Prec@1 54.688 (48.572)\nsup Test Prec@1 48.47\n[1/2][437/782] Loss_D: 1.34 Loss_G: 8.54 D(x): 0.83 
D(G(z)): 0.50 / 0.22 | Acc 96.9 / 87.1\n[1/2][438/782] Loss_D: 1.61 Loss_G: 7.57 D(x): 0.57 D(G(z)): 0.32 / 0.54 | Acc 85.9 / 87.1\n[1/2][439/782] Loss_D: 1.85 Loss_G: 8.71 D(x): 0.78 D(G(z)): 0.65 / 0.23 | Acc 93.8 / 87.1\n[1/2][440/782] Loss_D: 2.13 Loss_G: 7.44 D(x): 0.49 D(G(z)): 0.33 / 0.69 | Acc 65.6 / 87.1\n[1/2][441/782] Loss_D: 1.99 Loss_G: 8.50 D(x): 0.85 D(G(z)): 0.75 / 0.28 | Acc 100.0 / 87.1\n[1/2][442/782] Loss_D: 1.47 Loss_G: 8.43 D(x): 0.67 D(G(z)): 0.39 / 0.28 | Acc 82.8 / 87.1\n[1/2][443/782] Loss_D: 1.53 Loss_G: 7.82 D(x): 0.60 D(G(z)): 0.37 / 0.49 | Acc 92.2 / 87.1\n[1/2][444/782] Loss_D: 1.64 Loss_G: 8.39 D(x): 0.78 D(G(z)): 0.60 / 0.30 | Acc 96.9 / 87.1\n[1/2][445/782] Loss_D: 1.45 Loss_G: 8.44 D(x): 0.62 D(G(z)): 0.40 / 0.33 | Acc 95.3 / 87.1\n[1/2][446/782] Loss_D: 1.60 Loss_G: 7.88 D(x): 0.66 D(G(z)): 0.45 / 0.44 | Acc 87.5 / 87.1\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 51.562 (50.950)\nsemi Test: [100/157]\t Prec@1 45.312 (50.387)\nsemi Test: [150/157]\t Prec@1 48.438 (50.683)\nsemi Test Prec@1 50.45\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.376)\nsup Test: [100/157]\t Prec@1 46.875 (48.902)\nsup Test: [150/157]\t Prec@1 51.562 (48.613)\nsup Test Prec@1 48.51\n[1/2][447/782] Loss_D: 1.52 Loss_G: 8.31 D(x): 0.74 D(G(z)): 0.53 / 0.30 | Acc 93.8 / 87.1\n[1/2][448/782] Loss_D: 1.67 Loss_G: 7.80 D(x): 0.62 D(G(z)): 0.40 / 0.46 | Acc 78.1 / 87.1\n[1/2][449/782] Loss_D: 1.74 Loss_G: 8.25 D(x): 0.73 D(G(z)): 0.56 / 0.32 | Acc 89.1 / 87.1\n[1/2][450/782] Loss_D: 1.37 Loss_G: 8.34 D(x): 0.68 D(G(z)): 0.40 / 0.31 | Acc 93.8 / 87.1\n[1/2][451/782] Loss_D: 1.46 Loss_G: 8.31 D(x): 0.70 D(G(z)): 0.43 / 0.33 | Acc 90.6 / 87.2\n[1/2][452/782] Loss_D: 1.34 Loss_G: 8.80 D(x): 0.75 D(G(z)): 0.45 / 0.25 | Acc 93.8 / 87.2\n[1/2][453/782] Loss_D: 1.45 Loss_G: 7.92 D(x): 0.64 D(G(z)): 0.36 / 0.44 | Acc 87.5 / 87.2\n[1/2][454/782] Loss_D: 1.61 Loss_G: 8.26 D(x): 0.80 D(G(z)): 0.58 / 0.30 | Acc 92.2 / 87.2\n[1/2][455/782] Loss_D: 1.46 Loss_G: 8.16 D(x): 0.63 D(G(z)): 0.41 / 0.34 | Acc 95.3 / 87.2\n[1/2][456/782] Loss_D: 1.69 Loss_G: 7.82 D(x): 0.67 D(G(z)): 0.47 / 0.43 | Acc 87.5 / 87.2\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 51.562 (52.482)\nsemi Test: [100/157]\t Prec@1 46.875 (52.027)\nsemi Test: [150/157]\t Prec@1 54.688 (52.401)\nsemi Test Prec@1 52.19\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.805)\nsup Test: [100/157]\t Prec@1 46.875 (49.165)\nsup Test: [150/157]\t Prec@1 51.562 (48.882)\nsup Test Prec@1 48.76\n[1/2][457/782] Loss_D: 1.55 Loss_G: 8.32 D(x): 0.68 D(G(z)): 0.52 / 0.31 | Acc 98.4 / 87.2\n[1/2][458/782] Loss_D: 1.59 Loss_G: 7.95 D(x): 0.64 D(G(z)): 0.41 / 0.42 | Acc 85.9 / 87.2\n[1/2][459/782] Loss_D: 1.52 Loss_G: 8.28 D(x): 0.73 D(G(z)): 0.55 / 0.30 | Acc 96.9 / 87.2\n[1/2][460/782] Loss_D: 1.46 Loss_G: 8.30 D(x): 0.70 D(G(z)): 0.43 / 0.36 | Acc 87.5 / 87.2\n[1/2][461/782] Loss_D: 1.57 Loss_G: 7.90 D(x): 0.66 D(G(z)): 0.45 / 0.43 | Acc 89.1 / 87.2\n[1/2][462/782] Loss_D: 1.45 Loss_G: 8.56 D(x): 0.76 D(G(z)): 0.54 / 0.22 | Acc 98.4 / 87.3\n[1/2][463/782] Loss_D: 1.79 Loss_G: 7.55 D(x): 0.53 D(G(z)): 0.32 / 0.59 | Acc 79.7 / 87.3\n[1/2][464/782] Loss_D: 1.82 Loss_G: 8.38 D(x): 0.83 D(G(z)): 0.67 / 0.24 | Acc 90.6 / 87.3\n[1/2][465/782] Loss_D: 1.48 Loss_G: 8.04 D(x): 0.57 D(G(z)): 0.34 / 0.41 | Acc 89.1 / 87.3\n[1/2][466/782] Loss_D: 1.42 Loss_G: 8.26 D(x): 0.75 D(G(z)): 0.52 / 0.33 | Acc 93.8 / 87.3\nsemi Test: 
[0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 51.562 (53.431)\nsemi Test: [100/157]\t Prec@1 50.000 (52.413)\nsemi Test: [150/157]\t Prec@1 51.562 (53.177)\nsemi Test Prec@1 52.93\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.744)\nsup Test: [100/157]\t Prec@1 46.875 (48.979)\nsup Test: [150/157]\t Prec@1 53.125 (48.810)\nsup Test Prec@1 48.68\n[1/2][467/782] Loss_D: 1.37 Loss_G: 8.51 D(x): 0.72 D(G(z)): 0.44 / 0.27 | Acc 92.2 / 87.3\n[1/2][468/782] Loss_D: 1.60 Loss_G: 7.64 D(x): 0.57 D(G(z)): 0.38 / 0.53 | Acc 87.5 / 87.3\n[1/2][469/782] Loss_D: 1.74 Loss_G: 8.29 D(x): 0.77 D(G(z)): 0.63 / 0.28 | Acc 93.8 / 87.3\n[1/2][470/782] Loss_D: 1.77 Loss_G: 7.66 D(x): 0.50 D(G(z)): 0.38 / 0.48 | Acc 92.2 / 87.3\n[1/2][471/782] Loss_D: 1.66 Loss_G: 8.20 D(x): 0.73 D(G(z)): 0.57 / 0.32 | Acc 95.3 / 87.3\n[1/2][472/782] Loss_D: 1.52 Loss_G: 8.13 D(x): 0.65 D(G(z)): 0.45 / 0.33 | Acc 93.8 / 87.3\n[1/2][473/782] Loss_D: 1.66 Loss_G: 7.81 D(x): 0.58 D(G(z)): 0.41 / 0.46 | Acc 90.6 / 87.4\n[1/2][474/782] Loss_D: 1.49 Loss_G: 8.55 D(x): 0.77 D(G(z)): 0.55 / 0.27 | Acc 100.0 / 87.4\n[1/2][475/782] Loss_D: 1.47 Loss_G: 7.98 D(x): 0.56 D(G(z)): 0.37 / 0.42 | Acc 95.3 / 87.4\n[1/2][476/782] Loss_D: 1.41 Loss_G: 8.66 D(x): 0.75 D(G(z)): 0.53 / 0.27 | Acc 95.3 / 87.4\nsemi Test: [0/157]\t Prec@1 50.000 (50.000)\nsemi Test: [50/157]\t Prec@1 45.312 (52.972)\nsemi Test: [100/157]\t Prec@1 43.750 (52.150)\nsemi Test: [150/157]\t Prec@1 53.125 (52.525)\nsemi Test Prec@1 52.31\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.775)\nsup Test: [100/157]\t Prec@1 46.875 (49.196)\nsup Test: [150/157]\t Prec@1 51.562 (48.872)\nsup Test Prec@1 48.77\n[1/2][477/782] Loss_D: 1.90 Loss_G: 7.37 D(x): 0.49 D(G(z)): 0.39 / 0.60 | Acc 84.4 / 87.4\n[1/2][478/782] Loss_D: 1.78 Loss_G: 8.48 D(x): 0.82 D(G(z)): 0.68 / 0.23 | Acc 98.4 / 87.4\n[1/2][479/782] Loss_D: 1.46 Loss_G: 8.12 D(x): 0.61 D(G(z)): 0.32 / 0.35 | Acc 87.5 / 87.4\n[1/2][480/782] Loss_D: 1.60 Loss_G: 7.72 D(x): 0.62 D(G(z)): 0.44 / 0.49 | Acc 95.3 / 87.4\n[1/2][481/782] Loss_D: 1.58 Loss_G: 8.58 D(x): 0.80 D(G(z)): 0.58 / 0.24 | Acc 96.9 / 87.5\n[1/2][482/782] Loss_D: 1.62 Loss_G: 7.78 D(x): 0.55 D(G(z)): 0.37 / 0.46 | Acc 92.2 / 87.5\n[1/2][483/782] Loss_D: 1.52 Loss_G: 8.29 D(x): 0.77 D(G(z)): 0.54 / 0.28 | Acc 92.2 / 87.5\n[1/2][484/782] Loss_D: 1.52 Loss_G: 7.87 D(x): 0.60 D(G(z)): 0.38 / 0.41 | Acc 93.8 / 87.5\n[1/2][485/782] Loss_D: 1.47 Loss_G: 8.19 D(x): 0.73 D(G(z)): 0.50 / 0.32 | Acc 93.8 / 87.5\n[1/2][486/782] Loss_D: 1.43 Loss_G: 7.99 D(x): 0.64 D(G(z)): 0.42 / 0.38 | Acc 95.3 / 87.5\nsemi Test: [0/157]\t Prec@1 42.188 (42.188)\nsemi Test: [50/157]\t Prec@1 50.000 (52.972)\nsemi Test: [100/157]\t Prec@1 46.875 (53.171)\nsemi Test: [150/157]\t Prec@1 53.125 (53.622)\nsemi Test Prec@1 53.36\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 54.688 (48.897)\nsup Test: [100/157]\t Prec@1 46.875 (49.273)\nsup Test: [150/157]\t Prec@1 51.562 (48.934)\nsup Test Prec@1 48.81\n[1/2][487/782] Loss_D: 1.50 Loss_G: 8.21 D(x): 0.74 D(G(z)): 0.48 / 0.32 | Acc 90.6 / 87.5\n[1/2][488/782] Loss_D: 1.26 Loss_G: 8.42 D(x): 0.77 D(G(z)): 0.42 / 0.27 | Acc 93.8 / 87.5\n[1/2][489/782] Loss_D: 1.62 Loss_G: 7.61 D(x): 0.55 D(G(z)): 0.37 / 0.55 | Acc 90.6 / 87.6\n[1/2][490/782] Loss_D: 1.64 Loss_G: 8.50 D(x): 0.80 D(G(z)): 0.62 / 0.28 | Acc 96.9 / 87.6\n[1/2][491/782] Loss_D: 1.33 Loss_G: 8.22 D(x): 0.69 D(G(z)): 0.37 / 0.29 | Acc 92.2 / 87.6\n[1/2][492/782] Loss_D: 
1.35 Loss_G: 7.80 D(x): 0.68 D(G(z)): 0.38 / 0.40 | Acc 92.2 / 87.6\n[1/2][493/782] Loss_D: 1.49 Loss_G: 8.37 D(x): 0.75 D(G(z)): 0.51 / 0.27 | Acc 89.1 / 87.6\n[1/2][494/782] Loss_D: 1.37 Loss_G: 8.31 D(x): 0.66 D(G(z)): 0.38 / 0.29 | Acc 92.2 / 87.6\n[1/2][495/782] Loss_D: 1.45 Loss_G: 7.70 D(x): 0.60 D(G(z)): 0.38 / 0.49 | Acc 95.3 / 87.6\n[1/2][496/782] Loss_D: 1.37 Loss_G: 8.71 D(x): 0.82 D(G(z)): 0.58 / 0.19 | Acc 98.4 / 87.6\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 48.438 (52.053)\nsemi Test: [100/157]\t Prec@1 43.750 (51.949)\nsemi Test: [150/157]\t Prec@1 53.125 (52.411)\nsemi Test Prec@1 52.30\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 56.250 (48.529)\nsup Test: [100/157]\t Prec@1 46.875 (48.963)\nsup Test: [150/157]\t Prec@1 51.562 (48.748)\nsup Test Prec@1 48.66\n[1/2][497/782] Loss_D: 1.47 Loss_G: 7.58 D(x): 0.57 D(G(z)): 0.28 / 0.50 | Acc 85.9 / 87.6\n[1/2][498/782] Loss_D: 1.66 Loss_G: 8.37 D(x): 0.80 D(G(z)): 0.60 / 0.27 | Acc 96.9 / 87.7\n[1/2][499/782] Loss_D: 1.32 Loss_G: 8.36 D(x): 0.69 D(G(z)): 0.37 / 0.32 | Acc 90.6 / 87.7\n[1/2][500/782] Loss_D: 1.42 Loss_G: 7.78 D(x): 0.65 D(G(z)): 0.40 / 0.45 | Acc 89.1 / 87.7\n[1/2][501/782] Loss_D: 1.53 Loss_G: 8.06 D(x): 0.67 D(G(z)): 0.53 / 0.34 | Acc 98.4 / 87.7\n[1/2][502/782] Loss_D: 1.42 Loss_G: 8.16 D(x): 0.71 D(G(z)): 0.44 / 0.29 | Acc 93.8 / 87.7\n[1/2][503/782] Loss_D: 1.47 Loss_G: 7.52 D(x): 0.63 D(G(z)): 0.38 / 0.48 | Acc 89.1 / 87.7\n[1/2][504/782] Loss_D: 1.44 Loss_G: 8.45 D(x): 0.77 D(G(z)): 0.55 / 0.23 | Acc 98.4 / 87.7\n[1/2][505/782] Loss_D: 1.45 Loss_G: 7.48 D(x): 0.55 D(G(z)): 0.30 / 0.54 | Acc 93.8 / 87.7\n[1/2][506/782] Loss_D: 1.51 Loss_G: 8.58 D(x): 0.83 D(G(z)): 0.61 / 0.20 | Acc 96.9 / 87.7\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 51.562 (53.186)\nsemi Test: [100/157]\t Prec@1 46.875 (52.584)\nsemi Test: [150/157]\t Prec@1 51.562 (53.094)\nsemi Test Prec@1 52.93\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.683)\nsup Test: [100/157]\t Prec@1 46.875 (49.056)\nsup Test: [150/157]\t Prec@1 51.562 (48.758)\nsup Test Prec@1 48.64\n[1/2][507/782] Loss_D: 1.37 Loss_G: 7.88 D(x): 0.62 D(G(z)): 0.26 / 0.41 | Acc 85.9 / 87.7\n[1/2][508/782] Loss_D: 1.51 Loss_G: 7.92 D(x): 0.73 D(G(z)): 0.50 / 0.40 | Acc 92.2 / 87.8\n[1/2][509/782] Loss_D: 1.45 Loss_G: 8.24 D(x): 0.74 D(G(z)): 0.47 / 0.27 | Acc 92.2 / 87.8\n[1/2][510/782] Loss_D: 1.64 Loss_G: 7.54 D(x): 0.56 D(G(z)): 0.38 / 0.52 | Acc 85.9 / 87.8\n[1/2][511/782] Loss_D: 1.54 Loss_G: 8.31 D(x): 0.80 D(G(z)): 0.58 / 0.26 | Acc 98.4 / 87.8\n[1/2][512/782] Loss_D: 1.37 Loss_G: 7.92 D(x): 0.68 D(G(z)): 0.34 / 0.37 | Acc 87.5 / 87.8\n[1/2][513/782] Loss_D: 1.40 Loss_G: 7.84 D(x): 0.69 D(G(z)): 0.45 / 0.36 | Acc 95.3 / 87.8\n[1/2][514/782] Loss_D: 1.36 Loss_G: 7.99 D(x): 0.70 D(G(z)): 0.45 / 0.35 | Acc 95.3 / 87.8\n[1/2][515/782] Loss_D: 1.39 Loss_G: 7.84 D(x): 0.68 D(G(z)): 0.43 / 0.38 | Acc 95.3 / 87.8\n[1/2][516/782] Loss_D: 1.52 Loss_G: 7.87 D(x): 0.67 D(G(z)): 0.46 / 0.40 | Acc 90.6 / 87.8\nsemi Test: [0/157]\t Prec@1 43.750 (43.750)\nsemi Test: [50/157]\t Prec@1 46.875 (51.838)\nsemi Test: [100/157]\t Prec@1 43.750 (52.259)\nsemi Test: [150/157]\t Prec@1 53.125 (52.773)\nsemi Test Prec@1 52.48\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 56.250 (48.652)\nsup Test: [100/157]\t Prec@1 45.312 (49.149)\nsup Test: [150/157]\t Prec@1 51.562 (48.841)\nsup Test Prec@1 48.70\n[1/2][517/782] Loss_D: 1.56 Loss_G: 
8.07 D(x): 0.67 D(G(z)): 0.50 / 0.36 | Acc 95.3 / 87.8\n[1/2][518/782] Loss_D: 1.43 Loss_G: 7.96 D(x): 0.71 D(G(z)): 0.45 / 0.33 | Acc 89.1 / 87.8\n[1/2][519/782] Loss_D: 1.46 Loss_G: 7.65 D(x): 0.63 D(G(z)): 0.40 / 0.40 | Acc 95.3 / 87.9\n[1/2][520/782] Loss_D: 1.54 Loss_G: 8.00 D(x): 0.71 D(G(z)): 0.51 / 0.32 | Acc 93.8 / 87.9\n[1/2][521/782] Loss_D: 1.41 Loss_G: 7.71 D(x): 0.61 D(G(z)): 0.38 / 0.46 | Acc 96.9 / 87.9\n[1/2][522/782] Loss_D: 1.46 Loss_G: 8.20 D(x): 0.76 D(G(z)): 0.55 / 0.27 | Acc 96.9 / 87.9\n[1/2][523/782] Loss_D: 1.58 Loss_G: 7.56 D(x): 0.59 D(G(z)): 0.35 / 0.47 | Acc 87.5 / 87.9\n[1/2][524/782] Loss_D: 1.48 Loss_G: 8.17 D(x): 0.77 D(G(z)): 0.56 / 0.26 | Acc 96.9 / 87.9\n[1/2][525/782] Loss_D: 1.54 Loss_G: 7.54 D(x): 0.59 D(G(z)): 0.35 / 0.50 | Acc 85.9 / 87.9\n[1/2][526/782] Loss_D: 1.49 Loss_G: 8.51 D(x): 0.80 D(G(z)): 0.58 / 0.23 | Acc 98.4 / 87.9\nsemi Test: [0/157]\t Prec@1 54.688 (54.688)\nsemi Test: [50/157]\t Prec@1 43.750 (53.278)\nsemi Test: [100/157]\t Prec@1 50.000 (53.496)\nsemi Test: [150/157]\t Prec@1 56.250 (53.994)\nsemi Test Prec@1 53.83\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.836)\nsup Test: [100/157]\t Prec@1 45.312 (49.118)\nsup Test: [150/157]\t Prec@1 51.562 (48.820)\nsup Test Prec@1 48.68\n[1/2][527/782] Loss_D: 1.46 Loss_G: 7.42 D(x): 0.58 D(G(z)): 0.31 / 0.49 | Acc 87.5 / 87.9\n[1/2][528/782] Loss_D: 1.55 Loss_G: 7.98 D(x): 0.75 D(G(z)): 0.57 / 0.30 | Acc 98.4 / 88.0\n[1/2][529/782] Loss_D: 1.47 Loss_G: 7.83 D(x): 0.64 D(G(z)): 0.38 / 0.40 | Acc 93.8 / 88.0\n[1/2][530/782] Loss_D: 1.53 Loss_G: 7.88 D(x): 0.74 D(G(z)): 0.52 / 0.33 | Acc 93.8 / 88.0\n[1/2][531/782] Loss_D: 1.35 Loss_G: 8.15 D(x): 0.68 D(G(z)): 0.43 / 0.29 | Acc 96.9 / 88.0\n[1/2][532/782] Loss_D: 1.43 Loss_G: 7.71 D(x): 0.69 D(G(z)): 0.39 / 0.43 | Acc 87.5 / 88.0\n[1/2][533/782] Loss_D: 1.61 Loss_G: 7.97 D(x): 0.69 D(G(z)): 0.51 / 0.37 | Acc 93.8 / 88.0\n[1/2][534/782] Loss_D: 1.40 Loss_G: 8.35 D(x): 0.74 D(G(z)): 0.48 / 0.26 | Acc 93.8 / 88.0\n[1/2][535/782] Loss_D: 1.62 Loss_G: 7.43 D(x): 0.55 D(G(z)): 0.33 / 0.58 | Acc 87.5 / 88.0\n[1/2][536/782] Loss_D: 1.73 Loss_G: 8.85 D(x): 0.86 D(G(z)): 0.68 / 0.16 | Acc 98.4 / 88.0\nsemi Test: [0/157]\t Prec@1 51.562 (51.562)\nsemi Test: [50/157]\t Prec@1 43.750 (51.930)\nsemi Test: [100/157]\t Prec@1 42.188 (51.825)\nsemi Test: [150/157]\t Prec@1 57.812 (52.380)\nsemi Test Prec@1 52.24\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 56.250 (48.346)\nsup Test: [100/157]\t Prec@1 46.875 (49.072)\nsup Test: [150/157]\t Prec@1 51.562 (48.696)\nsup Test Prec@1 48.54\n[1/2][537/782] Loss_D: 1.92 Loss_G: 7.27 D(x): 0.47 D(G(z)): 0.25 / 0.65 | Acc 70.3 / 88.0\n[1/2][538/782] Loss_D: 1.82 Loss_G: 8.48 D(x): 0.86 D(G(z)): 0.71 / 0.24 | Acc 98.4 / 88.0\n[1/2][539/782] Loss_D: 1.75 Loss_G: 7.64 D(x): 0.55 D(G(z)): 0.37 / 0.47 | Acc 82.8 / 88.0\n[1/2][540/782] Loss_D: 1.49 Loss_G: 8.10 D(x): 0.79 D(G(z)): 0.55 / 0.29 | Acc 93.8 / 88.0\n[1/2][541/782] Loss_D: 1.47 Loss_G: 7.58 D(x): 0.61 D(G(z)): 0.37 / 0.44 | Acc 93.8 / 88.0\n[1/2][542/782] Loss_D: 1.61 Loss_G: 7.94 D(x): 0.71 D(G(z)): 0.54 / 0.36 | Acc 90.6 / 88.0\n[1/2][543/782] Loss_D: 1.50 Loss_G: 8.11 D(x): 0.72 D(G(z)): 0.46 / 0.30 | Acc 87.5 / 88.0\n[1/2][544/782] Loss_D: 1.55 Loss_G: 7.36 D(x): 0.58 D(G(z)): 0.41 / 0.52 | Acc 95.3 / 88.1\n[1/2][545/782] Loss_D: 1.44 Loss_G: 8.08 D(x): 0.76 D(G(z)): 0.57 / 0.27 | Acc 100.0 / 88.1\n[1/2][546/782] Loss_D: 1.57 Loss_G: 7.72 D(x): 0.60 D(G(z)): 0.39 / 0.40 | Acc 92.2 / 
88.1\nsemi Test: [0/157]\t Prec@1 51.562 (51.562)\nsemi Test: [50/157]\t Prec@1 50.000 (53.431)\nsemi Test: [100/157]\t Prec@1 46.875 (53.728)\nsemi Test: [150/157]\t Prec@1 53.125 (54.170)\nsemi Test Prec@1 54.00\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 54.688 (48.254)\nsup Test: [100/157]\t Prec@1 46.875 (48.979)\nsup Test: [150/157]\t Prec@1 51.562 (48.675)\nsup Test Prec@1 48.55\n[1/2][547/782] Loss_D: 1.61 Loss_G: 7.70 D(x): 0.66 D(G(z)): 0.49 / 0.44 | Acc 92.2 / 88.1\n[1/2][548/782] Loss_D: 1.56 Loss_G: 8.14 D(x): 0.75 D(G(z)): 0.53 / 0.28 | Acc 90.6 / 88.1\n[1/2][549/782] Loss_D: 1.54 Loss_G: 7.43 D(x): 0.62 D(G(z)): 0.37 / 0.52 | Acc 85.9 / 88.1\n[1/2][550/782] Loss_D: 1.53 Loss_G: 8.04 D(x): 0.76 D(G(z)): 0.58 / 0.25 | Acc 96.9 / 88.1\n[1/2][551/782] Loss_D: 1.36 Loss_G: 7.89 D(x): 0.68 D(G(z)): 0.37 / 0.33 | Acc 89.1 / 88.1\n[1/2][552/782] Loss_D: 1.38 Loss_G: 7.71 D(x): 0.69 D(G(z)): 0.43 / 0.40 | Acc 95.3 / 88.1\n[1/2][553/782] Loss_D: 1.58 Loss_G: 7.87 D(x): 0.69 D(G(z)): 0.48 / 0.36 | Acc 92.2 / 88.1\n[1/2][554/782] Loss_D: 1.53 Loss_G: 7.97 D(x): 0.65 D(G(z)): 0.47 / 0.34 | Acc 90.6 / 88.1\n[1/2][555/782] Loss_D: 1.60 Loss_G: 7.48 D(x): 0.63 D(G(z)): 0.43 / 0.43 | Acc 87.5 / 88.1\n[1/2][556/782] Loss_D: 1.41 Loss_G: 8.30 D(x): 0.79 D(G(z)): 0.54 / 0.23 | Acc 95.3 / 88.1\nsemi Test: [0/157]\t Prec@1 51.562 (51.562)\nsemi Test: [50/157]\t Prec@1 54.688 (52.083)\nsemi Test: [100/157]\t Prec@1 43.750 (52.367)\nsemi Test: [150/157]\t Prec@1 46.875 (52.918)\nsemi Test Prec@1 52.76\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 56.250 (48.438)\nsup Test: [100/157]\t Prec@1 46.875 (49.025)\nsup Test: [150/157]\t Prec@1 51.562 (48.634)\nsup Test Prec@1 48.49\n[1/2][557/782] Loss_D: 1.69 Loss_G: 7.18 D(x): 0.53 D(G(z)): 0.32 / 0.60 | Acc 81.2 / 88.1\n[1/2][558/782] Loss_D: 1.84 Loss_G: 8.09 D(x): 0.82 D(G(z)): 0.68 / 0.25 | Acc 93.8 / 88.1\n[1/2][559/782] Loss_D: 1.37 Loss_G: 8.08 D(x): 0.61 D(G(z)): 0.35 / 0.30 | Acc 90.6 / 88.1\n[1/2][560/782] Loss_D: 1.41 Loss_G: 7.64 D(x): 0.64 D(G(z)): 0.40 / 0.46 | Acc 93.8 / 88.2\n[1/2][561/782] Loss_D: 1.48 Loss_G: 8.44 D(x): 0.81 D(G(z)): 0.57 / 0.21 | Acc 96.9 / 88.2\n[1/2][562/782] Loss_D: 1.61 Loss_G: 7.28 D(x): 0.51 D(G(z)): 0.29 / 0.56 | Acc 89.1 / 88.2\n[1/2][563/782] Loss_D: 1.74 Loss_G: 8.13 D(x): 0.78 D(G(z)): 0.64 / 0.27 | Acc 98.4 / 88.2\n[1/2][564/782] Loss_D: 1.55 Loss_G: 7.53 D(x): 0.58 D(G(z)): 0.38 / 0.38 | Acc 92.2 / 88.2\n[1/2][565/782] Loss_D: 1.35 Loss_G: 7.71 D(x): 0.73 D(G(z)): 0.46 / 0.33 | Acc 96.9 / 88.2\n[1/2][566/782] Loss_D: 1.40 Loss_G: 7.91 D(x): 0.68 D(G(z)): 0.44 / 0.33 | Acc 96.9 / 88.2\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 48.438 (51.961)\nsemi Test: [100/157]\t Prec@1 57.812 (52.135)\nsemi Test: [150/157]\t Prec@1 53.125 (52.659)\nsemi Test Prec@1 52.42\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.621)\nsup Test: [100/157]\t Prec@1 46.875 (49.211)\nsup Test: [150/157]\t Prec@1 51.562 (48.820)\nsup Test Prec@1 48.70\n[1/2][567/782] Loss_D: 1.45 Loss_G: 7.60 D(x): 0.65 D(G(z)): 0.42 / 0.41 | Acc 87.5 / 88.2\n[1/2][568/782] Loss_D: 1.54 Loss_G: 7.94 D(x): 0.73 D(G(z)): 0.52 / 0.33 | Acc 90.6 / 88.2\n[1/2][569/782] Loss_D: 1.45 Loss_G: 7.73 D(x): 0.70 D(G(z)): 0.43 / 0.37 | Acc 87.5 / 88.2\n[1/2][570/782] Loss_D: 1.64 Loss_G: 7.50 D(x): 0.63 D(G(z)): 0.45 / 0.48 | Acc 87.5 / 88.2\n[1/2][571/782] Loss_D: 1.56 Loss_G: 8.07 D(x): 0.72 D(G(z)): 0.56 / 0.28 | Acc 100.0 / 
88.2\n[1/2][572/782] Loss_D: 1.33 Loss_G: 7.74 D(x): 0.66 D(G(z)): 0.37 / 0.36 | Acc 95.3 / 88.3\n[1/2][573/782] Loss_D: 1.65 Loss_G: 7.37 D(x): 0.58 D(G(z)): 0.46 / 0.50 | Acc 95.3 / 88.3\n[1/2][574/782] Loss_D: 1.61 Loss_G: 8.15 D(x): 0.77 D(G(z)): 0.59 / 0.25 | Acc 93.8 / 88.3\n[1/2][575/782] Loss_D: 1.67 Loss_G: 7.28 D(x): 0.58 D(G(z)): 0.33 / 0.54 | Acc 85.9 / 88.3\n[1/2][576/782] Loss_D: 1.61 Loss_G: 8.04 D(x): 0.78 D(G(z)): 0.60 / 0.28 | Acc 95.3 / 88.3\nsemi Test: [0/157]\t Prec@1 45.312 (45.312)\nsemi Test: [50/157]\t Prec@1 51.562 (52.328)\nsemi Test: [100/157]\t Prec@1 53.125 (52.460)\nsemi Test: [150/157]\t Prec@1 40.625 (53.084)\nsemi Test Prec@1 52.89\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 54.688 (48.346)\nsup Test: [100/157]\t Prec@1 46.875 (49.025)\nsup Test: [150/157]\t Prec@1 50.000 (48.758)\nsup Test Prec@1 48.61\n[1/2][577/782] Loss_D: 1.59 Loss_G: 7.62 D(x): 0.60 D(G(z)): 0.38 / 0.43 | Acc 82.8 / 88.3\n[1/2][578/782] Loss_D: 1.64 Loss_G: 7.48 D(x): 0.65 D(G(z)): 0.51 / 0.44 | Acc 96.9 / 88.3\n[1/2][579/782] Loss_D: 1.44 Loss_G: 8.32 D(x): 0.78 D(G(z)): 0.55 / 0.21 | Acc 96.9 / 88.3\n[1/2][580/782] Loss_D: 1.67 Loss_G: 7.21 D(x): 0.49 D(G(z)): 0.28 / 0.56 | Acc 87.5 / 88.3\n[1/2][581/782] Loss_D: 1.62 Loss_G: 8.09 D(x): 0.81 D(G(z)): 0.63 / 0.27 | Acc 96.9 / 88.3\n[1/2][582/782] Loss_D: 1.34 Loss_G: 7.75 D(x): 0.62 D(G(z)): 0.37 / 0.35 | Acc 96.9 / 88.3\n[1/2][583/782] Loss_D: 1.55 Loss_G: 7.74 D(x): 0.66 D(G(z)): 0.46 / 0.41 | Acc 90.6 / 88.3\n[1/2][584/782] Loss_D: 1.40 Loss_G: 8.14 D(x): 0.77 D(G(z)): 0.51 / 0.26 | Acc 96.9 / 88.4\n[1/2][585/782] Loss_D: 1.60 Loss_G: 7.44 D(x): 0.63 D(G(z)): 0.37 / 0.52 | Acc 82.8 / 88.3\n[1/2][586/782] Loss_D: 1.67 Loss_G: 7.81 D(x): 0.73 D(G(z)): 0.59 / 0.32 | Acc 96.9 / 88.4\nsemi Test: [0/157]\t Prec@1 54.688 (54.688)\nsemi Test: [50/157]\t Prec@1 45.312 (52.022)\nsemi Test: [100/157]\t Prec@1 53.125 (52.274)\nsemi Test: [150/157]\t Prec@1 53.125 (52.721)\nsemi Test Prec@1 52.42\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 53.125 (48.591)\nsup Test: [100/157]\t Prec@1 46.875 (49.180)\nsup Test: [150/157]\t Prec@1 51.562 (48.779)\nsup Test Prec@1 48.65\n[1/2][587/782] Loss_D: 1.47 Loss_G: 7.72 D(x): 0.64 D(G(z)): 0.40 / 0.37 | Acc 89.1 / 88.4\n[1/2][588/782] Loss_D: 1.61 Loss_G: 7.41 D(x): 0.61 D(G(z)): 0.47 / 0.47 | Acc 96.9 / 88.4\n[1/2][589/782] Loss_D: 1.55 Loss_G: 8.05 D(x): 0.75 D(G(z)): 0.54 / 0.26 | Acc 93.8 / 88.4\n[1/2][590/782] Loss_D: 1.46 Loss_G: 7.61 D(x): 0.64 D(G(z)): 0.35 / 0.39 | Acc 89.1 / 88.4\n[1/2][591/782] Loss_D: 1.37 Loss_G: 8.08 D(x): 0.78 D(G(z)): 0.49 / 0.27 | Acc 96.9 / 88.4\n[1/2][592/782] Loss_D: 1.49 Loss_G: 7.35 D(x): 0.58 D(G(z)): 0.36 / 0.49 | Acc 92.2 / 88.4\n[1/2][593/782] Loss_D: 1.43 Loss_G: 8.18 D(x): 0.79 D(G(z)): 0.57 / 0.25 | Acc 100.0 / 88.4\n[1/2][594/782] Loss_D: 1.46 Loss_G: 7.23 D(x): 0.58 D(G(z)): 0.34 / 0.46 | Acc 90.6 / 88.4\n[1/2][595/782] Loss_D: 1.55 Loss_G: 7.87 D(x): 0.72 D(G(z)): 0.55 / 0.32 | Acc 93.8 / 88.4\n[1/2][596/782] Loss_D: 1.39 Loss_G: 7.74 D(x): 0.66 D(G(z)): 0.41 / 0.36 | Acc 93.8 / 88.5\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 46.875 (53.125)\nsemi Test: [100/157]\t Prec@1 48.438 (53.171)\nsemi Test: [150/157]\t Prec@1 53.125 (53.529)\nsemi Test Prec@1 53.29\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 53.125 (48.683)\nsup Test: [100/157]\t Prec@1 46.875 (49.072)\nsup Test: [150/157]\t Prec@1 53.125 (48.655)\nsup Test Prec@1 
48.53\n[1/2][597/782] Loss_D: 1.25 Loss_G: 7.73 D(x): 0.72 D(G(z)): 0.44 / 0.31 | Acc 98.4 / 88.5\n[1/2][598/782] Loss_D: 1.44 Loss_G: 7.52 D(x): 0.59 D(G(z)): 0.39 / 0.42 | Acc 96.9 / 88.5\n[1/2][599/782] Loss_D: 1.49 Loss_G: 7.66 D(x): 0.68 D(G(z)): 0.51 / 0.34 | Acc 98.4 / 88.5\n[1/2][600/782] Loss_D: 1.33 Loss_G: 7.82 D(x): 0.65 D(G(z)): 0.44 / 0.33 | Acc 100.0 / 88.5\n[1/2][601/782] Loss_D: 1.49 Loss_G: 7.44 D(x): 0.63 D(G(z)): 0.43 / 0.41 | Acc 96.9 / 88.5\n[1/2][602/782] Loss_D: 1.53 Loss_G: 7.75 D(x): 0.70 D(G(z)): 0.50 / 0.35 | Acc 92.2 / 88.5\n[1/2][603/782] Loss_D: 1.32 Loss_G: 7.61 D(x): 0.67 D(G(z)): 0.42 / 0.34 | Acc 100.0 / 88.6\n[1/2][604/782] Loss_D: 1.46 Loss_G: 7.50 D(x): 0.67 D(G(z)): 0.43 / 0.43 | Acc 89.1 / 88.6\n[1/2][605/782] Loss_D: 1.56 Loss_G: 7.89 D(x): 0.71 D(G(z)): 0.54 / 0.29 | Acc 95.3 / 88.6\n[1/2][606/782] Loss_D: 1.60 Loss_G: 7.29 D(x): 0.49 D(G(z)): 0.36 / 0.53 | Acc 95.3 / 88.6\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 46.875 (52.849)\nsemi Test: [100/157]\t Prec@1 53.125 (52.645)\nsemi Test: [150/157]\t Prec@1 48.438 (53.518)\nsemi Test Prec@1 53.34\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.499)\nsup Test: [100/157]\t Prec@1 46.875 (48.933)\nsup Test: [150/157]\t Prec@1 53.125 (48.758)\nsup Test Prec@1 48.61\n[1/2][607/782] Loss_D: 1.59 Loss_G: 8.09 D(x): 0.78 D(G(z)): 0.60 / 0.27 | Acc 100.0 / 88.6\n[1/2][608/782] Loss_D: 1.28 Loss_G: 7.93 D(x): 0.65 D(G(z)): 0.38 / 0.30 | Acc 98.4 / 88.6\n[1/2][609/782] Loss_D: 1.37 Loss_G: 7.25 D(x): 0.67 D(G(z)): 0.39 / 0.48 | Acc 93.8 / 88.6\n[1/2][610/782] Loss_D: 1.55 Loss_G: 8.17 D(x): 0.78 D(G(z)): 0.58 / 0.23 | Acc 96.9 / 88.6\n[1/2][611/782] Loss_D: 1.69 Loss_G: 7.18 D(x): 0.49 D(G(z)): 0.31 / 0.60 | Acc 85.9 / 88.6\n[1/2][612/782] Loss_D: 1.82 Loss_G: 7.93 D(x): 0.81 D(G(z)): 0.68 / 0.27 | Acc 95.3 / 88.6\n[1/2][613/782] Loss_D: 1.58 Loss_G: 7.37 D(x): 0.56 D(G(z)): 0.37 / 0.42 | Acc 92.2 / 88.6\n[1/2][614/782] Loss_D: 1.45 Loss_G: 7.58 D(x): 0.71 D(G(z)): 0.50 / 0.36 | Acc 95.3 / 88.7\n[1/2][615/782] Loss_D: 1.50 Loss_G: 7.47 D(x): 0.64 D(G(z)): 0.44 / 0.38 | Acc 92.2 / 88.7\n[1/2][616/782] Loss_D: 1.38 Loss_G: 7.71 D(x): 0.70 D(G(z)): 0.48 / 0.34 | Acc 96.9 / 88.7\nsemi Test: [0/157]\t Prec@1 51.562 (51.562)\nsemi Test: [50/157]\t Prec@1 51.562 (53.922)\nsemi Test: [100/157]\t Prec@1 51.562 (53.883)\nsemi Test: [150/157]\t Prec@1 46.875 (54.243)\nsemi Test Prec@1 54.04\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (48.407)\nsup Test: [100/157]\t Prec@1 46.875 (49.025)\nsup Test: [150/157]\t Prec@1 51.562 (48.810)\nsup Test Prec@1 48.68\n[1/2][617/782] Loss_D: 1.34 Loss_G: 7.95 D(x): 0.70 D(G(z)): 0.44 / 0.31 | Acc 96.9 / 88.7\n[1/2][618/782] Loss_D: 1.66 Loss_G: 7.12 D(x): 0.55 D(G(z)): 0.38 / 0.56 | Acc 90.6 / 88.7\n[1/2][619/782] Loss_D: 1.60 Loss_G: 8.51 D(x): 0.86 D(G(z)): 0.66 / 0.18 | Acc 96.9 / 88.7\n[1/2][620/782] Loss_D: 1.75 Loss_G: 7.09 D(x): 0.44 D(G(z)): 0.24 / 0.57 | Acc 87.5 / 88.7\n[1/2][621/782] Loss_D: 1.76 Loss_G: 7.69 D(x): 0.82 D(G(z)): 0.65 / 0.33 | Acc 93.8 / 88.7\n[1/2][622/782] Loss_D: 1.53 Loss_G: 7.84 D(x): 0.67 D(G(z)): 0.44 / 0.31 | Acc 90.6 / 88.7\n[1/2][623/782] Loss_D: 1.44 Loss_G: 7.31 D(x): 0.63 D(G(z)): 0.39 / 0.45 | Acc 98.4 / 88.7\n[1/2][624/782] Loss_D: 1.37 Loss_G: 8.19 D(x): 0.81 D(G(z)): 0.54 / 0.23 | Acc 96.9 / 88.7\n[1/2][625/782] Loss_D: 1.81 Loss_G: 6.95 D(x): 0.44 D(G(z)): 0.31 / 0.63 | Acc 92.2 / 88.8\n[1/2][626/782] Loss_D: 1.80 Loss_G: 7.86 D(x): 
0.81 D(G(z)): 0.69 / 0.29 | Acc 96.9 / 88.8\nsemi Test: [0/157]\t Prec@1 51.562 (51.562)\nsemi Test: [50/157]\t Prec@1 53.125 (53.064)\nsemi Test: [100/157]\t Prec@1 48.438 (52.506)\nsemi Test: [150/157]\t Prec@1 45.312 (53.135)\nsemi Test Prec@1 52.97\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 51.562 (48.346)\nsup Test: [100/157]\t Prec@1 46.875 (49.087)\nsup Test: [150/157]\t Prec@1 51.562 (48.851)\nsup Test Prec@1 48.71\n[1/2][627/782] Loss_D: 1.47 Loss_G: 7.81 D(x): 0.60 D(G(z)): 0.38 / 0.31 | Acc 92.2 / 88.8\n[1/2][628/782] Loss_D: 1.39 Loss_G: 7.39 D(x): 0.64 D(G(z)): 0.42 / 0.42 | Acc 95.3 / 88.8\n[1/2][629/782] Loss_D: 1.52 Loss_G: 7.60 D(x): 0.68 D(G(z)): 0.50 / 0.35 | Acc 95.3 / 88.8\n[1/2][630/782] Loss_D: 1.53 Loss_G: 7.39 D(x): 0.61 D(G(z)): 0.45 / 0.40 | Acc 95.3 / 88.8\n[1/2][631/782] Loss_D: 1.51 Loss_G: 7.80 D(x): 0.66 D(G(z)): 0.49 / 0.34 | Acc 96.9 / 88.8\n[1/2][632/782] Loss_D: 1.53 Loss_G: 7.51 D(x): 0.62 D(G(z)): 0.44 / 0.41 | Acc 92.2 / 88.8\n[1/2][633/782] Loss_D: 1.54 Loss_G: 7.53 D(x): 0.66 D(G(z)): 0.51 / 0.37 | Acc 98.4 / 88.8\n[1/2][634/782] Loss_D: 1.49 Loss_G: 7.83 D(x): 0.68 D(G(z)): 0.47 / 0.30 | Acc 93.8 / 88.8\n[1/2][635/782] Loss_D: 1.54 Loss_G: 7.24 D(x): 0.62 D(G(z)): 0.41 / 0.45 | Acc 92.2 / 88.8\n[1/2][636/782] Loss_D: 1.64 Loss_G: 7.72 D(x): 0.74 D(G(z)): 0.53 / 0.34 | Acc 90.6 / 88.8\nsemi Test: [0/157]\t Prec@1 56.250 (56.250)\nsemi Test: [50/157]\t Prec@1 48.438 (52.665)\nsemi Test: [100/157]\t Prec@1 45.312 (52.800)\nsemi Test: [150/157]\t Prec@1 50.000 (53.435)\nsemi Test Prec@1 53.23\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 51.562 (48.468)\nsup Test: [100/157]\t Prec@1 46.875 (48.917)\nsup Test: [150/157]\t Prec@1 53.125 (48.675)\nsup Test Prec@1 48.54\n[1/2][637/782] Loss_D: 1.40 Loss_G: 7.72 D(x): 0.68 D(G(z)): 0.44 / 0.32 | Acc 95.3 / 88.9\n[1/2][638/782] Loss_D: 1.52 Loss_G: 7.32 D(x): 0.61 D(G(z)): 0.43 / 0.45 | Acc 98.4 / 88.9\n[1/2][639/782] Loss_D: 1.41 Loss_G: 7.68 D(x): 0.74 D(G(z)): 0.51 / 0.31 | Acc 96.9 / 88.9\n[1/2][640/782] Loss_D: 1.47 Loss_G: 7.46 D(x): 0.59 D(G(z)): 0.40 / 0.38 | Acc 98.4 / 88.9\n[1/2][641/782] Loss_D: 1.48 Loss_G: 7.52 D(x): 0.70 D(G(z)): 0.49 / 0.39 | Acc 92.2 / 88.9\n[1/2][642/782] Loss_D: 1.48 Loss_G: 7.64 D(x): 0.65 D(G(z)): 0.49 / 0.36 | Acc 98.4 / 88.9\n[1/2][643/782] Loss_D: 1.41 Loss_G: 7.52 D(x): 0.69 D(G(z)): 0.45 / 0.33 | Acc 92.2 / 88.9\n[1/2][644/782] Loss_D: 1.55 Loss_G: 7.36 D(x): 0.64 D(G(z)): 0.43 / 0.42 | Acc 90.6 / 88.9\n[1/2][645/782] Loss_D: 1.52 Loss_G: 8.09 D(x): 0.73 D(G(z)): 0.54 / 0.22 | Acc 95.3 / 88.9\n[1/2][646/782] Loss_D: 1.48 Loss_G: 6.93 D(x): 0.58 D(G(z)): 0.31 / 0.51 | Acc 89.1 / 88.9\nsemi Test: [0/157]\t Prec@1 53.125 (53.125)\nsemi Test: [50/157]\t Prec@1 50.000 (53.217)\nsemi Test: [100/157]\t Prec@1 45.312 (53.295)\nsemi Test: [150/157]\t Prec@1 51.562 (53.684)\nsemi Test Prec@1 53.47\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 51.562 (48.591)\nsup Test: [100/157]\t Prec@1 46.875 (49.041)\nsup Test: [150/157]\t Prec@1 51.562 (48.769)\nsup Test Prec@1 48.67\n[1/2][647/782] Loss_D: 1.63 Loss_G: 8.03 D(x): 0.78 D(G(z)): 0.62 / 0.25 | Acc 98.4 / 89.0\n[1/2][648/782] Loss_D: 1.40 Loss_G: 7.55 D(x): 0.63 D(G(z)): 0.36 / 0.36 | Acc 87.5 / 89.0\n[1/2][649/782] Loss_D: 1.65 Loss_G: 7.30 D(x): 0.59 D(G(z)): 0.46 / 0.47 | Acc 92.2 / 89.0\n[1/2][650/782] Loss_D: 1.55 Loss_G: 8.03 D(x): 0.76 D(G(z)): 0.56 / 0.25 | Acc 96.9 / 89.0\n[1/2][651/782] Loss_D: 1.35 Loss_G: 7.48 D(x): 0.59 D(G(z)): 
0.34 / 0.37 | Acc 96.9 / 89.0\n[1/2][652/782] Loss_D: 1.47 Loss_G: 7.50 D(x): 0.66 D(G(z)): 0.47 / 0.37 | Acc 96.9 / 89.0\n[1/2][653/782] Loss_D: 1.40 Loss_G: 7.74 D(x): 0.68 D(G(z)): 0.46 / 0.31 | Acc 95.3 / 89.0\n[1/2][654/782] Loss_D: 1.36 Loss_G: 7.44 D(x): 0.66 D(G(z)): 0.39 / 0.35 | Acc 93.8 / 89.0\n[1/2][655/782] Loss_D: 1.45 Loss_G: 7.61 D(x): 0.65 D(G(z)): 0.44 / 0.40 | Acc 95.3 / 89.0\n[1/2][656/782] Loss_D: 1.36 Loss_G: 8.08 D(x): 0.78 D(G(z)): 0.53 / 0.20 | Acc 96.9 / 89.0\nsemi Test: [0/157]\t Prec@1 54.688 (54.688)\nsemi Test: [50/157]\t Prec@1 50.000 (53.860)\nsemi Test: [100/157]\t Prec@1 50.000 (53.651)\nsemi Test: [150/157]\t Prec@1 59.375 (54.201)\nsemi Test Prec@1 54.06\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 53.125 (48.713)\nsup Test: [100/157]\t Prec@1 46.875 (49.257)\nsup Test: [150/157]\t Prec@1 51.562 (48.965)\nsup Test Prec@1 48.83\n[1/2][657/782] Loss_D: 1.62 Loss_G: 6.80 D(x): 0.50 D(G(z)): 0.30 / 0.56 | Acc 93.8 / 89.0\n[1/2][658/782] Loss_D: 1.68 Loss_G: 7.65 D(x): 0.77 D(G(z)): 0.63 / 0.31 | Acc 98.4 / 89.1\n[1/2][659/782] Loss_D: 1.44 Loss_G: 7.62 D(x): 0.65 D(G(z)): 0.42 / 0.33 | Acc 90.6 / 89.1\n[1/2][660/782] Loss_D: 1.43 Loss_G: 7.32 D(x): 0.64 D(G(z)): 0.42 / 0.42 | Acc 93.8 / 89.1\n[1/2][661/782] Loss_D: 1.53 Loss_G: 7.50 D(x): 0.70 D(G(z)): 0.53 / 0.31 | Acc 98.4 / 89.1\n[1/2][662/782] Loss_D: 1.48 Loss_G: 7.33 D(x): 0.61 D(G(z)): 0.39 / 0.38 | Acc 93.8 / 89.1\n[1/2][663/782] Loss_D: 1.45 Loss_G: 7.70 D(x): 0.73 D(G(z)): 0.50 / 0.31 | Acc 95.3 / 89.1\n[1/2][664/782] Loss_D: 1.39 Loss_G: 7.13 D(x): 0.61 D(G(z)): 0.39 / 0.43 | Acc 98.4 / 89.1\n[1/2][665/782] Loss_D: 1.57 Loss_G: 7.87 D(x): 0.74 D(G(z)): 0.57 / 0.27 | Acc 96.9 / 89.1\n[1/2][666/782] Loss_D: 1.47 Loss_G: 7.17 D(x): 0.58 D(G(z)): 0.37 / 0.42 | Acc 93.8 / 89.1\nsemi Test: [0/157]\t Prec@1 53.125 (53.125)\nsemi Test: [50/157]\t Prec@1 46.875 (53.462)\nsemi Test: [100/157]\t Prec@1 48.438 (53.543)\nsemi Test: [150/157]\t Prec@1 51.562 (53.632)\nsemi Test Prec@1 53.42\nsup Test: [0/157]\t Prec@1 42.188 (42.188)\nsup Test: [50/157]\t Prec@1 51.562 (48.683)\nsup Test: [100/157]\t Prec@1 46.875 (49.134)\nsup Test: [150/157]\t Prec@1 53.125 (48.913)\nsup Test Prec@1 48.77\n[1/2][667/782] Loss_D: 1.49 Loss_G: 7.68 D(x): 0.70 D(G(z)): 0.53 / 0.33 | Acc 98.4 / 89.1\n[1/2][668/782] Loss_D: 1.46 Loss_G: 7.51 D(x): 0.65 D(G(z)): 0.42 / 0.35 | Acc 93.8 / 89.1\n[1/2][669/782] Loss_D: 1.36 Loss_G: 7.66 D(x): 0.70 D(G(z)): 0.45 / 0.33 | Acc 98.4 / 89.2\n[1/2][670/782] Loss_D: 1.66 Loss_G: 7.18 D(x): 0.61 D(G(z)): 0.44 / 0.41 | Acc 90.6 / 89.2\n[1/2][671/782] Loss_D: 1.42 Loss_G: 8.01 D(x): 0.77 D(G(z)): 0.52 / 0.23 | Acc 93.8 / 89.2\n[1/2][672/782] Loss_D: 1.61 Loss_G: 6.93 D(x): 0.51 D(G(z)): 0.34 / 0.56 | Acc 90.6 / 89.2\n[1/2][673/782] Loss_D: 1.76 Loss_G: 8.02 D(x): 0.78 D(G(z)): 0.65 / 0.27 | Acc 93.8 / 89.2\n[1/2][674/782] Loss_D: 1.25 Loss_G: 7.75 D(x): 0.68 D(G(z)): 0.38 / 0.30 | Acc 96.9 / 89.2\n[1/2][675/782] Loss_D: 1.58 Loss_G: 7.00 D(x): 0.60 D(G(z)): 0.39 / 0.51 | Acc 85.9 / 89.2\n[1/2][676/782] Loss_D: 1.60 Loss_G: 8.02 D(x): 0.79 D(G(z)): 0.62 / 0.22 | Acc 100.0 / 89.2\nsemi Test: [0/157]\t Prec@1 53.125 (53.125)\nsemi Test: [50/157]\t Prec@1 46.875 (53.002)\nsemi Test: [100/157]\t Prec@1 50.000 (53.156)\nsemi Test: [150/157]\t Prec@1 51.562 (53.518)\nsemi Test Prec@1 53.39\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 50.000 (48.805)\nsup Test: [100/157]\t Prec@1 46.875 (49.196)\nsup Test: [150/157]\t Prec@1 53.125 (48.955)\nsup 
Test Prec@1 48.83\n[1/2][677/782] Loss_D: 1.57 Loss_G: 7.09 D(x): 0.54 D(G(z)): 0.33 / 0.50 | Acc 92.2 / 89.2\n[1/2][678/782] Loss_D: 1.52 Loss_G: 7.81 D(x): 0.80 D(G(z)): 0.58 / 0.26 | Acc 96.9 / 89.2\n[1/2][679/782] Loss_D: 1.34 Loss_G: 7.40 D(x): 0.61 D(G(z)): 0.35 / 0.36 | Acc 96.9 / 89.2\n[1/2][680/782] Loss_D: 1.68 Loss_G: 7.07 D(x): 0.60 D(G(z)): 0.47 / 0.51 | Acc 93.8 / 89.2\n[1/2][681/782] Loss_D: 1.48 Loss_G: 7.90 D(x): 0.77 D(G(z)): 0.58 / 0.23 | Acc 98.4 / 89.2\n[1/2][682/782] Loss_D: 1.42 Loss_G: 7.17 D(x): 0.58 D(G(z)): 0.33 / 0.46 | Acc 90.6 / 89.3\n[1/2][683/782] Loss_D: 1.57 Loss_G: 7.45 D(x): 0.71 D(G(z)): 0.56 / 0.35 | Acc 96.9 / 89.3\n[1/2][684/782] Loss_D: 1.44 Loss_G: 7.61 D(x): 0.69 D(G(z)): 0.46 / 0.30 | Acc 93.8 / 89.3\n[1/2][685/782] Loss_D: 1.73 Loss_G: 6.88 D(x): 0.54 D(G(z)): 0.41 / 0.56 | Acc 90.6 / 89.3\n[1/2][686/782] Loss_D: 1.69 Loss_G: 7.96 D(x): 0.82 D(G(z)): 0.65 / 0.21 | Acc 98.4 / 89.3\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 51.562 (54.963)\nsemi Test: [100/157]\t Prec@1 50.000 (54.533)\nsemi Test: [150/157]\t Prec@1 50.000 (54.667)\nsemi Test Prec@1 54.41\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 50.000 (48.499)\nsup Test: [100/157]\t Prec@1 46.875 (49.041)\nsup Test: [150/157]\t Prec@1 53.125 (48.738)\nsup Test Prec@1 48.62\n[1/2][687/782] Loss_D: 1.64 Loss_G: 7.02 D(x): 0.56 D(G(z)): 0.31 / 0.50 | Acc 82.8 / 89.3\n[1/2][688/782] Loss_D: 1.68 Loss_G: 7.42 D(x): 0.72 D(G(z)): 0.59 / 0.36 | Acc 96.9 / 89.3\n[1/2][689/782] Loss_D: 1.41 Loss_G: 7.71 D(x): 0.66 D(G(z)): 0.45 / 0.30 | Acc 95.3 / 89.3\n[1/2][690/782] Loss_D: 1.35 Loss_G: 7.24 D(x): 0.64 D(G(z)): 0.37 / 0.39 | Acc 92.2 / 89.3\n[1/2][691/782] Loss_D: 1.59 Loss_G: 7.30 D(x): 0.67 D(G(z)): 0.51 / 0.40 | Acc 95.3 / 89.3\n[1/2][692/782] Loss_D: 1.43 Loss_G: 7.66 D(x): 0.70 D(G(z)): 0.49 / 0.31 | Acc 96.9 / 89.3\n[1/2][693/782] Loss_D: 1.46 Loss_G: 7.11 D(x): 0.61 D(G(z)): 0.40 / 0.42 | Acc 93.8 / 89.3\n[1/2][694/782] Loss_D: 1.38 Loss_G: 7.74 D(x): 0.76 D(G(z)): 0.51 / 0.27 | Acc 98.4 / 89.3\n[1/2][695/782] Loss_D: 1.42 Loss_G: 7.06 D(x): 0.64 D(G(z)): 0.38 / 0.41 | Acc 90.6 / 89.3\n[1/2][696/782] Loss_D: 1.49 Loss_G: 7.38 D(x): 0.69 D(G(z)): 0.49 / 0.36 | Acc 92.2 / 89.3\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 54.688 (54.228)\nsemi Test: [100/157]\t Prec@1 43.750 (53.837)\nsemi Test: [150/157]\t Prec@1 51.562 (54.005)\nsemi Test Prec@1 53.76\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (48.621)\nsup Test: [100/157]\t Prec@1 46.875 (49.118)\nsup Test: [150/157]\t Prec@1 53.125 (48.841)\nsup Test Prec@1 48.73\n[1/2][697/782] Loss_D: 1.60 Loss_G: 7.23 D(x): 0.61 D(G(z)): 0.48 / 0.40 | Acc 93.8 / 89.3\n[1/2][698/782] Loss_D: 1.39 Loss_G: 7.60 D(x): 0.71 D(G(z)): 0.46 / 0.32 | Acc 93.8 / 89.4\n[1/2][699/782] Loss_D: 1.49 Loss_G: 7.09 D(x): 0.62 D(G(z)): 0.43 / 0.41 | Acc 95.3 / 89.4\n[1/2][700/782] Loss_D: 1.59 Loss_G: 7.28 D(x): 0.67 D(G(z)): 0.48 / 0.39 | Acc 90.6 / 89.4\n[1/2][701/782] Loss_D: 1.35 Loss_G: 7.64 D(x): 0.73 D(G(z)): 0.48 / 0.26 | Acc 100.0 / 89.4\n[1/2][702/782] Loss_D: 1.41 Loss_G: 7.07 D(x): 0.58 D(G(z)): 0.34 / 0.47 | Acc 89.1 / 89.4\n[1/2][703/782] Loss_D: 1.51 Loss_G: 7.36 D(x): 0.75 D(G(z)): 0.54 / 0.34 | Acc 96.9 / 89.4\n[1/2][704/782] Loss_D: 1.41 Loss_G: 7.56 D(x): 0.65 D(G(z)): 0.42 / 0.36 | Acc 93.8 / 89.4\n[1/2][705/782] Loss_D: 1.47 Loss_G: 7.34 D(x): 0.66 D(G(z)): 0.46 / 0.36 | Acc 100.0 / 89.4\n[1/2][706/782] Loss_D: 1.54 Loss_G: 
7.15 D(x): 0.61 D(G(z)): 0.46 / 0.39 | Acc 95.3 / 89.4\nsemi Test: [0/157]\t Prec@1 46.875 (46.875)\nsemi Test: [50/157]\t Prec@1 54.688 (53.523)\nsemi Test: [100/157]\t Prec@1 40.625 (52.877)\nsemi Test: [150/157]\t Prec@1 48.438 (53.291)\nsemi Test Prec@1 53.15\nsup Test: [0/157]\t Prec@1 39.062 (39.062)\nsup Test: [50/157]\t Prec@1 53.125 (48.683)\nsup Test: [100/157]\t Prec@1 46.875 (49.149)\nsup Test: [150/157]\t Prec@1 53.125 (48.903)\nsup Test Prec@1 48.79\n[1/2][707/782] Loss_D: 1.43 Loss_G: 7.60 D(x): 0.74 D(G(z)): 0.50 / 0.29 | Acc 93.8 / 89.4\n[1/2][708/782] Loss_D: 1.52 Loss_G: 6.86 D(x): 0.58 D(G(z)): 0.36 / 0.52 | Acc 89.1 / 89.4\n[1/2][709/782] Loss_D: 1.64 Loss_G: 7.70 D(x): 0.73 D(G(z)): 0.58 / 0.29 | Acc 98.4 / 89.4\n[1/2][710/782] Loss_D: 1.30 Loss_G: 7.54 D(x): 0.67 D(G(z)): 0.38 / 0.29 | Acc 93.8 / 89.4\n[1/2][711/782] Loss_D: 1.40 Loss_G: 7.03 D(x): 0.67 D(G(z)): 0.38 / 0.47 | Acc 92.2 / 89.4\n[1/2][712/782] Loss_D: 1.72 Loss_G: 7.51 D(x): 0.69 D(G(z)): 0.60 / 0.32 | Acc 96.9 / 89.5\n[1/2][713/782] Loss_D: 1.48 Loss_G: 7.29 D(x): 0.63 D(G(z)): 0.40 / 0.42 | Acc 89.1 / 89.5\n[1/2][714/782] Loss_D: 1.34 Loss_G: 7.36 D(x): 0.73 D(G(z)): 0.49 / 0.33 | Acc 98.4 / 89.5\n[1/2][715/782] Loss_D: 1.43 Loss_G: 7.48 D(x): 0.70 D(G(z)): 0.44 / 0.31 | Acc 92.2 / 89.5\n[1/2][716/782] Loss_D: 1.55 Loss_G: 6.88 D(x): 0.55 D(G(z)): 0.39 / 0.52 | Acc 100.0 / 89.5\nsemi Test: [0/157]\t Prec@1 56.250 (56.250)\nsemi Test: [50/157]\t Prec@1 60.938 (53.676)\nsemi Test: [100/157]\t Prec@1 50.000 (53.094)\nsemi Test: [150/157]\t Prec@1 50.000 (53.611)\nsemi Test Prec@1 53.48\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (48.652)\nsup Test: [100/157]\t Prec@1 46.875 (49.041)\nsup Test: [150/157]\t Prec@1 53.125 (48.862)\nsup Test Prec@1 48.74\n[1/2][717/782] Loss_D: 1.59 Loss_G: 7.79 D(x): 0.80 D(G(z)): 0.60 / 0.25 | Acc 93.8 / 89.5\n[1/2][718/782] Loss_D: 1.38 Loss_G: 7.05 D(x): 0.57 D(G(z)): 0.32 / 0.44 | Acc 92.2 / 89.5\n[1/2][719/782] Loss_D: 1.46 Loss_G: 7.53 D(x): 0.74 D(G(z)): 0.54 / 0.31 | Acc 96.9 / 89.5\n[1/2][720/782] Loss_D: 1.64 Loss_G: 6.75 D(x): 0.53 D(G(z)): 0.40 / 0.56 | Acc 96.9 / 89.5\n[1/2][721/782] Loss_D: 1.56 Loss_G: 7.71 D(x): 0.79 D(G(z)): 0.61 / 0.24 | Acc 95.3 / 89.5\n[1/2][722/782] Loss_D: 1.30 Loss_G: 7.44 D(x): 0.65 D(G(z)): 0.33 / 0.32 | Acc 92.2 / 89.5\n[1/2][723/782] Loss_D: 1.46 Loss_G: 6.86 D(x): 0.61 D(G(z)): 0.39 / 0.54 | Acc 96.9 / 89.5\n[1/2][724/782] Loss_D: 1.59 Loss_G: 7.64 D(x): 0.78 D(G(z)): 0.62 / 0.24 | Acc 96.9 / 89.6\n[1/2][725/782] Loss_D: 1.48 Loss_G: 6.92 D(x): 0.55 D(G(z)): 0.31 / 0.49 | Acc 90.6 / 89.6\n[1/2][726/782] Loss_D: 1.49 Loss_G: 7.22 D(x): 0.77 D(G(z)): 0.56 / 0.32 | Acc 93.8 / 89.6\nsemi Test: [0/157]\t Prec@1 56.250 (56.250)\nsemi Test: [50/157]\t Prec@1 51.562 (53.707)\nsemi Test: [100/157]\t Prec@1 56.250 (53.605)\nsemi Test: [150/157]\t Prec@1 50.000 (53.901)\nsemi Test Prec@1 53.80\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.560)\nsup Test: [100/157]\t Prec@1 46.875 (49.072)\nsup Test: [150/157]\t Prec@1 51.562 (48.820)\nsup Test Prec@1 48.70\n[1/2][727/782] Loss_D: 1.45 Loss_G: 7.29 D(x): 0.66 D(G(z)): 0.43 / 0.34 | Acc 90.6 / 89.6\n[1/2][728/782] Loss_D: 1.62 Loss_G: 6.68 D(x): 0.57 D(G(z)): 0.41 / 0.52 | Acc 92.2 / 89.6\n[1/2][729/782] Loss_D: 1.52 Loss_G: 7.68 D(x): 0.80 D(G(z)): 0.60 / 0.24 | Acc 100.0 / 89.6\n[1/2][730/782] Loss_D: 1.51 Loss_G: 6.77 D(x): 0.53 D(G(z)): 0.29 / 0.51 | Acc 89.1 / 89.6\n[1/2][731/782] Loss_D: 1.41 Loss_G: 7.50 D(x): 
0.82 D(G(z)): 0.57 / 0.28 | Acc 96.9 / 89.6\n[1/2][732/782] Loss_D: 1.65 Loss_G: 6.77 D(x): 0.56 D(G(z)): 0.36 / 0.52 | Acc 85.9 / 89.6\n[1/2][733/782] Loss_D: 1.56 Loss_G: 7.51 D(x): 0.78 D(G(z)): 0.60 / 0.29 | Acc 95.3 / 89.6\n[1/2][734/782] Loss_D: 1.59 Loss_G: 6.88 D(x): 0.57 D(G(z)): 0.35 / 0.50 | Acc 87.5 / 89.6\n[1/2][735/782] Loss_D: 1.58 Loss_G: 7.23 D(x): 0.72 D(G(z)): 0.57 / 0.36 | Acc 98.4 / 89.6\n[1/2][736/782] Loss_D: 1.43 Loss_G: 7.57 D(x): 0.69 D(G(z)): 0.46 / 0.27 | Acc 96.9 / 89.6\nsemi Test: [0/157]\t Prec@1 48.438 (48.438)\nsemi Test: [50/157]\t Prec@1 54.688 (53.217)\nsemi Test: [100/157]\t Prec@1 45.312 (53.063)\nsemi Test: [150/157]\t Prec@1 56.250 (53.404)\nsemi Test Prec@1 53.22\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (48.683)\nsup Test: [100/157]\t Prec@1 46.875 (49.149)\nsup Test: [150/157]\t Prec@1 53.125 (48.924)\nsup Test Prec@1 48.82\n[1/2][737/782] Loss_D: 1.47 Loss_G: 6.77 D(x): 0.57 D(G(z)): 0.33 / 0.51 | Acc 92.2 / 89.6\n[1/2][738/782] Loss_D: 1.48 Loss_G: 7.47 D(x): 0.77 D(G(z)): 0.58 / 0.28 | Acc 98.4 / 89.6\n[1/2][739/782] Loss_D: 1.39 Loss_G: 7.27 D(x): 0.58 D(G(z)): 0.37 / 0.37 | Acc 96.9 / 89.6\n[1/2][740/782] Loss_D: 1.48 Loss_G: 7.03 D(x): 0.63 D(G(z)): 0.42 / 0.47 | Acc 92.2 / 89.6\n[1/2][741/782] Loss_D: 1.46 Loss_G: 7.63 D(x): 0.80 D(G(z)): 0.57 / 0.25 | Acc 98.4 / 89.7\n[1/2][742/782] Loss_D: 1.47 Loss_G: 7.02 D(x): 0.55 D(G(z)): 0.33 / 0.42 | Acc 96.9 / 89.7\n[1/2][743/782] Loss_D: 1.55 Loss_G: 7.29 D(x): 0.68 D(G(z)): 0.51 / 0.37 | Acc 98.4 / 89.7\n[1/2][744/782] Loss_D: 1.46 Loss_G: 7.26 D(x): 0.66 D(G(z)): 0.47 / 0.37 | Acc 95.3 / 89.7\n[1/2][745/782] Loss_D: 1.44 Loss_G: 7.08 D(x): 0.64 D(G(z)): 0.45 / 0.39 | Acc 92.2 / 89.7\n[1/2][746/782] Loss_D: 1.48 Loss_G: 7.07 D(x): 0.66 D(G(z)): 0.47 / 0.42 | Acc 96.9 / 89.7\nsemi Test: [0/157]\t Prec@1 56.250 (56.250)\nsemi Test: [50/157]\t Prec@1 50.000 (54.657)\nsemi Test: [100/157]\t Prec@1 53.125 (54.131)\nsemi Test: [150/157]\t Prec@1 54.688 (53.870)\nsemi Test Prec@1 53.67\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 53.125 (48.805)\nsup Test: [100/157]\t Prec@1 46.875 (49.273)\nsup Test: [150/157]\t Prec@1 53.125 (48.986)\nsup Test Prec@1 48.83\n[1/2][747/782] Loss_D: 1.42 Loss_G: 7.44 D(x): 0.74 D(G(z)): 0.52 / 0.28 | Acc 95.3 / 89.7\n[1/2][748/782] Loss_D: 1.57 Loss_G: 6.76 D(x): 0.55 D(G(z)): 0.33 / 0.53 | Acc 87.5 / 89.7\n[1/2][749/782] Loss_D: 1.49 Loss_G: 7.95 D(x): 0.85 D(G(z)): 0.59 / 0.17 | Acc 96.9 / 89.7\n[1/2][750/782] Loss_D: 1.51 Loss_G: 6.49 D(x): 0.54 D(G(z)): 0.27 / 0.60 | Acc 92.2 / 89.7\n[1/2][751/782] Loss_D: 1.72 Loss_G: 7.62 D(x): 0.83 D(G(z)): 0.66 / 0.23 | Acc 95.3 / 89.7\n[1/2][752/782] Loss_D: 1.41 Loss_G: 7.09 D(x): 0.59 D(G(z)): 0.32 / 0.40 | Acc 92.2 / 89.7\n[1/2][753/782] Loss_D: 1.50 Loss_G: 6.91 D(x): 0.68 D(G(z)): 0.48 / 0.43 | Acc 95.3 / 89.7\n[1/2][754/782] Loss_D: 1.36 Loss_G: 7.70 D(x): 0.79 D(G(z)): 0.53 / 0.24 | Acc 100.0 / 89.7\n[1/2][755/782] Loss_D: 1.49 Loss_G: 6.83 D(x): 0.54 D(G(z)): 0.31 / 0.48 | Acc 93.8 / 89.7\n[1/2][756/782] Loss_D: 1.44 Loss_G: 7.45 D(x): 0.77 D(G(z)): 0.57 / 0.28 | Acc 100.0 / 89.8\nsemi Test: [0/157]\t Prec@1 57.812 (57.812)\nsemi Test: [50/157]\t Prec@1 54.688 (53.830)\nsemi Test: [100/157]\t Prec@1 54.688 (54.162)\nsemi Test: [150/157]\t Prec@1 51.562 (54.481)\nsemi Test Prec@1 54.30\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 54.688 (48.836)\nsup Test: [100/157]\t Prec@1 46.875 (49.226)\nsup Test: [150/157]\t Prec@1 51.562 
(48.913)\nsup Test Prec@1 48.79\n[1/2][757/782] Loss_D: 1.32 Loss_G: 7.03 D(x): 0.62 D(G(z)): 0.36 / 0.37 | Acc 96.9 / 89.8\n[1/2][758/782] Loss_D: 1.54 Loss_G: 7.07 D(x): 0.63 D(G(z)): 0.47 / 0.44 | Acc 96.9 / 89.8\n[1/2][759/782] Loss_D: 1.62 Loss_G: 7.08 D(x): 0.68 D(G(z)): 0.52 / 0.40 | Acc 96.9 / 89.8\n[1/2][760/782] Loss_D: 1.48 Loss_G: 7.39 D(x): 0.68 D(G(z)): 0.49 / 0.29 | Acc 95.3 / 89.8\n[1/2][761/782] Loss_D: 1.49 Loss_G: 6.84 D(x): 0.61 D(G(z)): 0.38 / 0.47 | Acc 90.6 / 89.8\n[1/2][762/782] Loss_D: 1.50 Loss_G: 7.80 D(x): 0.84 D(G(z)): 0.58 / 0.22 | Acc 93.8 / 89.8\n[1/2][763/782] Loss_D: 1.68 Loss_G: 6.74 D(x): 0.46 D(G(z)): 0.32 / 0.57 | Acc 93.8 / 89.8\n[1/2][764/782] Loss_D: 1.52 Loss_G: 7.48 D(x): 0.81 D(G(z)): 0.62 / 0.26 | Acc 100.0 / 89.8\n[1/2][765/782] Loss_D: 1.68 Loss_G: 6.77 D(x): 0.51 D(G(z)): 0.37 / 0.50 | Acc 87.5 / 89.8\n[1/2][766/782] Loss_D: 1.61 Loss_G: 7.34 D(x): 0.76 D(G(z)): 0.59 / 0.30 | Acc 95.3 / 89.8\nsemi Test: [0/157]\t Prec@1 53.125 (53.125)\nsemi Test: [50/157]\t Prec@1 51.562 (53.094)\nsemi Test: [100/157]\t Prec@1 46.875 (53.233)\nsemi Test: [150/157]\t Prec@1 53.125 (53.260)\nsemi Test Prec@1 53.12\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (49.050)\nsup Test: [100/157]\t Prec@1 46.875 (49.319)\nsup Test: [150/157]\t Prec@1 51.562 (49.048)\nsup Test Prec@1 48.93\n[1/2][767/782] Loss_D: 1.38 Loss_G: 7.35 D(x): 0.64 D(G(z)): 0.39 / 0.34 | Acc 87.5 / 89.8\n[1/2][768/782] Loss_D: 1.44 Loss_G: 6.69 D(x): 0.63 D(G(z)): 0.43 / 0.45 | Acc 96.9 / 89.8\n[1/2][769/782] Loss_D: 1.36 Loss_G: 7.56 D(x): 0.79 D(G(z)): 0.53 / 0.24 | Acc 98.4 / 89.8\n[1/2][770/782] Loss_D: 1.46 Loss_G: 6.89 D(x): 0.57 D(G(z)): 0.34 / 0.50 | Acc 96.9 / 89.9\n[1/2][771/782] Loss_D: 1.51 Loss_G: 7.39 D(x): 0.78 D(G(z)): 0.58 / 0.30 | Acc 96.9 / 89.9\n[1/2][772/782] Loss_D: 1.40 Loss_G: 7.08 D(x): 0.64 D(G(z)): 0.37 / 0.39 | Acc 92.2 / 89.9\n[1/2][773/782] Loss_D: 1.57 Loss_G: 6.95 D(x): 0.62 D(G(z)): 0.47 / 0.47 | Acc 93.8 / 89.9\n[1/2][774/782] Loss_D: 1.39 Loss_G: 7.88 D(x): 0.80 D(G(z)): 0.57 / 0.20 | Acc 100.0 / 89.9\n[1/2][775/782] Loss_D: 1.84 Loss_G: 6.39 D(x): 0.50 D(G(z)): 0.28 / 0.65 | Acc 73.4 / 89.9\n[1/2][776/782] Loss_D: 1.82 Loss_G: 7.46 D(x): 0.82 D(G(z)): 0.69 / 0.30 | Acc 95.3 / 89.9\nsemi Test: [0/157]\t Prec@1 57.812 (57.812)\nsemi Test: [50/157]\t Prec@1 43.750 (52.083)\nsemi Test: [100/157]\t Prec@1 51.562 (51.176)\nsemi Test: [150/157]\t Prec@1 50.000 (51.242)\nsemi Test Prec@1 51.08\nsup Test: [0/157]\t Prec@1 40.625 (40.625)\nsup Test: [50/157]\t Prec@1 56.250 (48.744)\nsup Test: [100/157]\t Prec@1 46.875 (49.196)\nsup Test: [150/157]\t Prec@1 53.125 (48.934)\nsup Test Prec@1 48.82\n[1/2][777/782] Loss_D: 1.43 Loss_G: 7.33 D(x): 0.62 D(G(z)): 0.40 / 0.30 | Acc 93.8 / 89.9\n[1/2][778/782] Loss_D: 1.34 Loss_G: 6.93 D(x): 0.63 D(G(z)): 0.39 / 0.41 | Acc 98.4 / 89.9\n[1/2][779/782] Loss_D: 1.58 Loss_G: 6.95 D(x): 0.65 D(G(z)): 0.51 / 0.40 | Acc 96.9 / 89.9\n[1/2][780/782] Loss_D: 1.47 Loss_G: 7.31 D(x): 0.68 D(G(z)): 0.48 / 0.32 | Acc 93.8 / 89.9\n[1/2][781/782] Loss_D: 1.67 Loss_G: 6.75 D(x): 0.51 D(G(z)): 0.39 / 0.47 | Acc 92.2 / 89.9\n"
],
[
"from tensorflow.python.summary import event_accumulator\nimport pandas as pd\nfrom plotnine import *\nea = event_accumulator.EventAccumulator(opt.outf)\nea.Reload()",
"INFO:tensorflow:Directory watcher advancing from modelfiles/pytorch_demo3/events.out.tfevents.1512681730.en-cs-nikola-compute01 to modelfiles/pytorch_demo3/events.out.tfevents.1512681765.en-cs-nikola-compute01\n"
],
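[
"# Hedged sketch (assumes the EventAccumulator built above): list the scalar tags that were\n# actually logged, to confirm the two test-accuracy curves exist before reading them.\nprint(ea.Tags().get('scalars', []))",
"_____no_output_____"
],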
[
"_df1 = pd.DataFrame(ea.Scalars('test_acc_semi'))\n_df2 = pd.DataFrame(ea.Scalars('test_acc_sup'))\ndf = pd.DataFrame()\ndf['Iteration'] = pd.concat([_df1['step'], _df2['step']])\ndf['Accuracy'] = pd.concat([_df1['value'], _df2['value']])\ndf['Classification'] = ['BayesGAN']*len(_df1['step']) + ['Baseline']*len(_df2['step'])",
"_____no_output_____"
]
],
[
[
"The results show that the Bayesian discriminator trained with the Bayesian generator outperforms the discriminator trained on partial data.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"p = ggplot(df, aes(x='Iteration', y='Accuracy', color='Classification', label='Classification')) + geom_point(size=0.5)\nprint(p)",
"_____no_output_____"
]
],
[
[
"After training for 50 epochs, below are the samples generator by four different parameters $\\theta_g$'s. Note that different parameters tend to have different artistic styles.\n\n\n\n\n",
"_____no_output_____"
],
[
"Note: This code is adapted from the implementation by Saatchai and Wilson in Tensorflow (https://github.com/andrewgordonwilson/bayesgan) and the DCGAN code from Pytorch examples (https://github.com/pytorch/examples/tree/master/dcgan).",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c1faba66c96e1b367628b7986893260f30723e | 107,739 | ipynb | Jupyter Notebook | California_Medicaid_Eligibility/Electronic_Health_Record_Program_Payments_Data_Cleaning.ipynb | LastAncientOne/Kaggle-Project | 06e0b1c9c7b62323ec978c491c2c57037b0832f6 | [
"MIT"
] | 12 | 2020-05-03T10:24:24.000Z | 2021-09-28T21:05:22.000Z | California_Medicaid_Eligibility/Electronic_Health_Record_Program_Payments_Data_Cleaning.ipynb | LastAncientOne/Kaggle-Project | 06e0b1c9c7b62323ec978c491c2c57037b0832f6 | [
"MIT"
] | null | null | null | California_Medicaid_Eligibility/Electronic_Health_Record_Program_Payments_Data_Cleaning.ipynb | LastAncientOne/Kaggle-Project | 06e0b1c9c7b62323ec978c491c2c57037b0832f6 | [
"MIT"
] | 14 | 2019-08-20T14:50:43.000Z | 2022-02-28T13:39:26.000Z | 118.005476 | 7,497 | 0.457624 | [
[
[
"# Electronic Health Record (EHR) Incentive Program Payments for Eligible Providers ",
"_____no_output_____"
],
[
"## Data Cleaning\n\nhttps://data.chhs.ca.gov/dataset/electronic-health-record-ehr-incentive-program-payments-for-eligible-providers3",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
],
[
"ehr_data = pd.read_csv(\"Electronic_Health_Record_Program_Payments.csv\")",
"_____no_output_____"
],
[
"ehr_data.head()",
"_____no_output_____"
],
[
"ehr_data.tail()",
"_____no_output_____"
],
[
"ehr_data.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 25691 entries, 0 to 25690\nData columns (total 22 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 X 25691 non-null float64\n 1 Y 25691 non-null float64\n 2 OBJECTID 25691 non-null int64 \n 3 Provider_Name 25691 non-null object \n 4 NPI 25691 non-null int64 \n 5 Medicaid_EP_Hospital_Type 25691 non-null object \n 6 Specialty 25691 non-null object \n 7 Business_Street_Address 25691 non-null object \n 8 Business_City 25691 non-null object \n 9 Business_County 25691 non-null object \n 10 Business_ZIP_Code 25691 non-null int64 \n 11 Business_State_Territory 25691 non-null object \n 12 Program_Year 25691 non-null int64 \n 13 Payment_Year 25691 non-null int64 \n 14 Payment_Year_Number 25691 non-null int64 \n 15 Payment_Criteria_Medicaid 25691 non-null object \n 16 Payee_Name 25691 non-null object \n 17 Payee_NPI 25691 non-null int64 \n 18 total_payments 25691 non-null float64\n 19 total_recent_payments 25691 non-null float64\n 20 Latitude 25691 non-null float64\n 21 Longitude 25691 non-null float64\ndtypes: float64(6), int64(7), object(9)\nmemory usage: 4.3+ MB\n"
],
[
"print(\"Exploratory Data Analysis\")\nprint(\"Electronic Health Record Data\")\nprint('-'*40)\nprint(\"Dataset information\") \nprint(ehr_data.info(memory_usage='deep',verbose=False))\nprint('-'*40)\nprint(ehr_data.info())\nprint('-'*40)\nprint(\"Data type:\")\nprint(ehr_data.dtypes)\nprint('-'*40)\nprint(\"Check unique values without NaN\")\nprint(ehr_data.nunique())\nprint('-'*40)\nprint(\"Data shape:\")\nprint(ehr_data.shape)\nprint('-'*40)\nprint(\"Data columns Names:\")\nprint(ehr_data.columns)\nprint('-'*40)\nprint(\"Check for NaNs:\")\nprint(ehr_data.isnull().values.any())\nprint('-'*40)\nprint(\"How many NaN it has in each columns?\")\nprint(ehr_data.isnull().sum())\nprint('-'*40)\nprint(\"Data Statistics Summary:\")\nprint(ehr_data.describe())\nprint('\\n')",
"Exploratory Data Analysis\nElectronic Health Record Data\n----------------------------------------\nDataset information\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 25691 entries, 0 to 25690\nColumns: 22 entries, X to Longitude\ndtypes: float64(6), int64(7), object(9)\nmemory usage: 18.0 MB\nNone\n----------------------------------------\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 25691 entries, 0 to 25690\nData columns (total 22 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 X 25691 non-null float64\n 1 Y 25691 non-null float64\n 2 OBJECTID 25691 non-null int64 \n 3 Provider_Name 25691 non-null object \n 4 NPI 25691 non-null int64 \n 5 Medicaid_EP_Hospital_Type 25691 non-null object \n 6 Specialty 25691 non-null object \n 7 Business_Street_Address 25691 non-null object \n 8 Business_City 25691 non-null object \n 9 Business_County 25691 non-null object \n 10 Business_ZIP_Code 25691 non-null int64 \n 11 Business_State_Territory 25691 non-null object \n 12 Program_Year 25691 non-null int64 \n 13 Payment_Year 25691 non-null int64 \n 14 Payment_Year_Number 25691 non-null int64 \n 15 Payment_Criteria_Medicaid 25691 non-null object \n 16 Payee_Name 25691 non-null object \n 17 Payee_NPI 25691 non-null int64 \n 18 total_payments 25691 non-null float64\n 19 total_recent_payments 25691 non-null float64\n 20 Latitude 25691 non-null float64\n 21 Longitude 25691 non-null float64\ndtypes: float64(6), int64(7), object(9)\nmemory usage: 4.3+ MB\nNone\n----------------------------------------\nData type:\nX float64\nY float64\nOBJECTID int64\nProvider_Name object\nNPI int64\nMedicaid_EP_Hospital_Type object\nSpecialty object\nBusiness_Street_Address object\nBusiness_City object\nBusiness_County object\nBusiness_ZIP_Code int64\nBusiness_State_Territory object\nProgram_Year int64\nPayment_Year int64\nPayment_Year_Number int64\nPayment_Criteria_Medicaid object\nPayee_Name object\nPayee_NPI int64\ntotal_payments float64\ntotal_recent_payments float64\nLatitude float64\nLongitude float64\ndtype: object\n----------------------------------------\nCheck unique values without NaN\nX 5590\nY 5577\nOBJECTID 25691\nProvider_Name 25344\nNPI 25691\nMedicaid_EP_Hospital_Type 6\nSpecialty 65\nBusiness_Street_Address 10270\nBusiness_City 955\nBusiness_County 59\nBusiness_ZIP_Code 1075\nBusiness_State_Territory 1\nProgram_Year 10\nPayment_Year 9\nPayment_Year_Number 6\nPayment_Criteria_Medicaid 2\nPayee_Name 3995\nPayee_NPI 6272\ntotal_payments 64\ntotal_recent_payments 10\nLatitude 5577\nLongitude 5590\ndtype: int64\n----------------------------------------\nData shape:\n(25691, 22)\n----------------------------------------\nData columns Names:\nIndex(['X', 'Y', 'OBJECTID', 'Provider_Name', 'NPI',\n 'Medicaid_EP_Hospital_Type', 'Specialty', 'Business_Street_Address',\n 'Business_City', 'Business_County', 'Business_ZIP_Code',\n 'Business_State_Territory', 'Program_Year', 'Payment_Year',\n 'Payment_Year_Number', 'Payment_Criteria_Medicaid', 'Payee_Name',\n 'Payee_NPI', 'total_payments', 'total_recent_payments', 'Latitude',\n 'Longitude'],\n dtype='object')\n----------------------------------------\nCheck for NaNs:\nFalse\n----------------------------------------\nHow many NaN it has in each columns?\nX 0\nY 0\nOBJECTID 0\nProvider_Name 0\nNPI 0\nMedicaid_EP_Hospital_Type 0\nSpecialty 0\nBusiness_Street_Address 0\nBusiness_City 0\nBusiness_County 0\nBusiness_ZIP_Code 0\nBusiness_State_Territory 0\nProgram_Year 0\nPayment_Year 0\nPayment_Year_Number 0\nPayment_Criteria_Medicaid 
0\nPayee_Name 0\nPayee_NPI 0\ntotal_payments 0\ntotal_recent_payments 0\nLatitude 0\nLongitude 0\ndtype: int64\n----------------------------------------\nData Statistics Summary:\n X Y OBJECTID NPI \\\ncount 25691.000000 25691.000000 25691.000000 2.569100e+04 \nmean -119.547305 35.556689 12846.000000 1.500277e+09 \nstd 2.023064 2.098932 7416.497219 2.870431e+08 \nmin -124.261685 32.558280 1.000000 1.003004e+09 \n25% -121.878240 33.931861 6423.500000 1.255367e+09 \n50% -118.445983 34.219683 12846.000000 1.508020e+09 \n75% -117.975432 37.756332 19268.500000 1.745514e+09 \nmax -114.599924 41.964287 25691.000000 1.993000e+09 \n\n Business_ZIP_Code Program_Year Payment_Year Payment_Year_Number \\\ncount 25691.000000 25691.000000 25691.000000 25691.000000 \nmean 92890.083531 2014.997976 2015.902067 2.196917 \nstd 1845.851756 2.440749 2.415981 1.630466 \nmin 90001.000000 2011.000000 2012.000000 1.000000 \n25% 91353.500000 2013.000000 2014.000000 1.000000 \n50% 92868.000000 2015.000000 2016.000000 1.000000 \n75% 94553.000000 2017.000000 2018.000000 3.000000 \nmax 96161.000000 2020.000000 2020.000000 6.000000 \n\n Payee_NPI total_payments total_recent_payments Latitude \\\ncount 2.569100e+04 25691.000000 25691.000000 25691.000000 \nmean 1.471618e+09 30993.723676 15284.829258 35.556689 \nstd 2.782789e+08 13789.971202 6402.156462 2.098932 \nmin 1.003001e+09 0.000000 0.000000 32.558280 \n25% 1.245343e+09 21250.000000 8500.000000 33.931861 \n50% 1.447262e+09 21250.000000 21250.000000 34.219683 \n75% 1.689864e+09 38250.000000 21250.000000 37.756332 \nmax 1.992998e+09 63750.000000 21250.000000 41.964287 \n\n Longitude \ncount 25691.000000 \nmean -119.547305 \nstd 2.023064 \nmin -124.261685 \n25% -121.878240 \n50% -118.445983 \n75% -117.975432 \nmax -114.599924 \n\n\n"
],
[
"ehr_data[\"Provider_Name\"]",
"_____no_output_____"
],
[
"ehr_data[['Last_Name', 'First_Name']] = ehr_data[\"Provider_Name\"].str.split(\", \", n = 1, expand = True) ",
"_____no_output_____"
],
[
"ehr_data",
"_____no_output_____"
],
[
"ehr_data['First_Name'] = ehr_data['First_Name'].str.capitalize()\n\n",
"_____no_output_____"
],
[
"ehr_data",
"_____no_output_____"
],
[
"ehr_data['Last_Name'] = ehr_data['Last_Name'].str.capitalize()",
"_____no_output_____"
],
[
"ehr_data",
"_____no_output_____"
],
[
"ehr_data['Provider_Name'] = ehr_data['Last_Name'].str.cat(ehr_data['First_Name'],sep=\", \")",
"_____no_output_____"
],
[
"ehr_data",
"_____no_output_____"
],
[
"ehr_data['Business_Street_Address'] = ehr_data['Business_Street_Address'].apply(lambda x: x.title())",
"_____no_output_____"
],
[
"ehr_data['Business_City'] = ehr_data['Business_City'].str.capitalize()",
"_____no_output_____"
],
[
"ehr_data['Business_County'] = ehr_data['Business_County'].str.capitalize()",
"_____no_output_____"
],
[
"ehr_data",
"_____no_output_____"
],
[
"ehr_data.to_csv(\"Electronic_Health_Record_clean.csv\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c202685372c17f5c7ea076bb0a321bee679cec | 112,659 | ipynb | Jupyter Notebook | Project/SageMaker Project.ipynb | JasperEssien2/SagemakerDeployment | bd909fc7a052912084877f9297ecddafd3abb02b | [
"MIT"
] | null | null | null | Project/SageMaker Project.ipynb | JasperEssien2/SagemakerDeployment | bd909fc7a052912084877f9297ecddafd3abb02b | [
"MIT"
] | null | null | null | Project/SageMaker Project.ipynb | JasperEssien2/SagemakerDeployment | bd909fc7a052912084877f9297ecddafd3abb02b | [
"MIT"
] | null | null | null | 54.688835 | 3,002 | 0.640339 | [
[
[
"# Creating a Sentiment Analysis Web App\n## Using PyTorch and SageMaker\n\n_Deep Learning Nanodegree Program | Deployment_\n\n---\n\nNow that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.\n\n## Instructions\n\nSome template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!\n\nIn addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.\n\n> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by typically clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.\n\n## General Outline\n\nRecall the general outline for SageMaker projects using a notebook instance.\n\n1. Download or otherwise retrieve the data.\n2. Process / Prepare the data.\n3. Upload the processed data to S3.\n4. Train a chosen model.\n5. Test the trained model (typically using a batch transform job).\n6. Deploy the trained model.\n7. Use the deployed model.\n\nFor this project, you will be following the steps in the general outline with some modifications. \n\nFirst, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.\n\nIn addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.",
"_____no_output_____"
],
[
"## Step 1: Downloading the data\n\nAs in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/)\n\n> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.",
"_____no_output_____"
]
],
[
[
"%mkdir ../data\n!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\n!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data",
"mkdir: cannot create directory ‘../data’: File exists\n--2020-07-01 11:14:49-- http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\nResolving ai.stanford.edu (ai.stanford.edu)... 171.64.68.10\nConnecting to ai.stanford.edu (ai.stanford.edu)|171.64.68.10|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 84125825 (80M) [application/x-gzip]\nSaving to: ‘../data/aclImdb_v1.tar.gz’\n\n../data/aclImdb_v1. 100%[===================>] 80.23M 24.4MB/s in 4.2s \n\n2020-07-01 11:14:54 (19.1 MB/s) - ‘../data/aclImdb_v1.tar.gz’ saved [84125825/84125825]\n\n"
]
],
[
[
"## Step 2: Preparing and Processing the data\n\nAlso, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.",
"_____no_output_____"
]
],
[
[
"import os\nimport glob\n\ndef read_imdb_data(data_dir='../data/aclImdb'):\n data = {}\n labels = {}\n \n for data_type in ['train', 'test']:\n data[data_type] = {}\n labels[data_type] = {}\n \n for sentiment in ['pos', 'neg']:\n data[data_type][sentiment] = []\n labels[data_type][sentiment] = []\n \n path = os.path.join(data_dir, data_type, sentiment, '*.txt')\n files = glob.glob(path)\n \n for f in files:\n with open(f) as review:\n data[data_type][sentiment].append(review.read())\n # Here we represent a positive review by '1' and a negative review by '0'\n labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)\n \n assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \\\n \"{}/{} data size does not match labels size\".format(data_type, sentiment)\n \n return data, labels",
"_____no_output_____"
],
[
"data, labels = read_imdb_data()\nprint(\"IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg\".format(\n len(data['train']['pos']), len(data['train']['neg']),\n len(data['test']['pos']), len(data['test']['neg'])))",
"IMDB reviews: train = 12500 pos / 12500 neg, test = 12500 pos / 12500 neg\n"
]
],
[
[
"Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.",
"_____no_output_____"
]
],
[
[
"from sklearn.utils import shuffle\n\ndef prepare_imdb_data(data, labels):\n \"\"\"Prepare training and test sets from IMDb movie reviews.\"\"\"\n \n #Combine positive and negative reviews and labels\n data_train = data['train']['pos'] + data['train']['neg']\n data_test = data['test']['pos'] + data['test']['neg']\n labels_train = labels['train']['pos'] + labels['train']['neg']\n labels_test = labels['test']['pos'] + labels['test']['neg']\n \n #Shuffle reviews and corresponding labels within training and test sets\n data_train, labels_train = shuffle(data_train, labels_train)\n data_test, labels_test = shuffle(data_test, labels_test)\n \n # Return a unified training data, test data, training labels, test labets\n return data_train, data_test, labels_train, labels_test",
"_____no_output_____"
],
[
"train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)\nprint(\"IMDb reviews (combined): train = {}, test = {}\".format(len(train_X), len(test_X)))",
"IMDb reviews (combined): train = 25000, test = 25000\n"
]
],
[
[
"Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.",
"_____no_output_____"
]
],
[
[
"print(train_X[100])\nprint(train_y[100])",
"Upon The Straight Story release in 1999, it was praised for being David Lynch's first film that ignored his regular themes of the macabre and the surreal. Based on a true story of one man and his journey to visit his estranged brother on a John Deere '66 mower, at first glance its an odd story for Lynch to direct. Yet as the story develops you can see some of Lynch's trademark motifs coming through.<br /><br />Lynch's focus on small town America and its inhabitants is still as prevalent as in his previous efforts such as Blue Velvet or Twin Peaks, but the most notable difference is that the weirdness is curbed down. The restrictions imposed means that the film has the notable accolade of being one of the few live action films that I can think of that features a G rating. Incredibly significant, this films stands as evidence that beautiful and significant family films can be produced.<br /><br />The Straight Story was the first feature which Lynch directed where he had no hand at writing. For many Lynch devotees this was a huge negative point. Almost universally acclaimed, the only overly negative review by James Brundage of filmcritic.com focused on this very criticism, that it wasn't a typical Lynch film. \"Lynch is struggling within the mold of a G-Rated story that isn't his own.\" Brundage claims, with his protagonist Alvin Straight \"quoting lines directly from Confucious.\" He argues that the story is weak and the dialogue even worse. Yet this is about the only criticism that many will read for the film. Whilst it is true that it is not Lynch in the sense of Eraserhead, Lost Highway or Mulholland Drive - all films which I also adore, The Straight Story features a different side of Lynch that is by no means terrible. If you are a Lynch fan, it is most important to separate that side of Lynch with this feature.<br /><br />The narrative is slow and thoughtful, which gives you a real sense of the protagonist's thoughts as he travels to his destination. Alvin constantly is reminded about his past and his relationships with his wife, children and his brother. Yet particularly significant is that there are no flashbacks, which only adds to the effect, which reminded me of my conversations with my grandparents. The conclusion arrives like watching a boat being carried down a slow meandering river and it is beautiful to watch. The natural landscapes of the US are accentuated and together with the beautiful soundtrack by Angelo Badalamenti, makes me yearn to go to America. The performances are also excellent with every actor believable in their roles and Richard Farnsworth is particularly excellent. His Oscar nomination was greatly deserved and it was a shame that he didn't win. Regardless, however it is probably the finest swan-song for any actor. <br /><br />So whilst The Straight Story features none of Lynch's complex narratives or trademark dialogue, the film is a fascinating character study about getting old and comes highly recommended!\n1\n"
]
],
[
[
"The first step in processing the reviews is to make sure that any html tags that appear should be removed. In addition we wish to tokenize our input, that way words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.",
"_____no_output_____"
]
],
[
[
"import nltk\nfrom nltk.corpus import stopwords\nfrom nltk.stem.porter import *\n\nimport re\nfrom bs4 import BeautifulSoup\n\ndef review_to_words(review):\n nltk.download(\"stopwords\", quiet=True)\n stemmer = PorterStemmer()\n \n text = BeautifulSoup(review, \"html.parser\").get_text() # Remove HTML tags\n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text.lower()) # Convert to lower case\n words = text.split() # Split string into words\n words = [w for w in words if w not in stopwords.words(\"english\")] # Remove stopwords\n words = [PorterStemmer().stem(w) for w in words] # stem\n \n return words",
"_____no_output_____"
]
],
[
[
"The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.",
"_____no_output_____"
]
],
[
[
"# TODO: Apply review_to_words to a review (train_X[100] or any other review)\nreview = review_to_words(train_X[100])\nprint(review)",
"['upon', 'straight', 'stori', 'releas', '1999', 'prais', 'david', 'lynch', 'first', 'film', 'ignor', 'regular', 'theme', 'macabr', 'surreal', 'base', 'true', 'stori', 'one', 'man', 'journey', 'visit', 'estrang', 'brother', 'john', 'deer', '66', 'mower', 'first', 'glanc', 'odd', 'stori', 'lynch', 'direct', 'yet', 'stori', 'develop', 'see', 'lynch', 'trademark', 'motif', 'come', 'lynch', 'focu', 'small', 'town', 'america', 'inhabit', 'still', 'preval', 'previou', 'effort', 'blue', 'velvet', 'twin', 'peak', 'notabl', 'differ', 'weird', 'curb', 'restrict', 'impos', 'mean', 'film', 'notabl', 'accolad', 'one', 'live', 'action', 'film', 'think', 'featur', 'g', 'rate', 'incred', 'signific', 'film', 'stand', 'evid', 'beauti', 'signific', 'famili', 'film', 'produc', 'straight', 'stori', 'first', 'featur', 'lynch', 'direct', 'hand', 'write', 'mani', 'lynch', 'devote', 'huge', 'neg', 'point', 'almost', 'univers', 'acclaim', 'overli', 'neg', 'review', 'jame', 'brundag', 'filmcrit', 'com', 'focus', 'critic', 'typic', 'lynch', 'film', 'lynch', 'struggl', 'within', 'mold', 'g', 'rate', 'stori', 'brundag', 'claim', 'protagonist', 'alvin', 'straight', 'quot', 'line', 'directli', 'confuci', 'argu', 'stori', 'weak', 'dialogu', 'even', 'wors', 'yet', 'critic', 'mani', 'read', 'film', 'whilst', 'true', 'lynch', 'sens', 'eraserhead', 'lost', 'highway', 'mulholland', 'drive', 'film', 'also', 'ador', 'straight', 'stori', 'featur', 'differ', 'side', 'lynch', 'mean', 'terribl', 'lynch', 'fan', 'import', 'separ', 'side', 'lynch', 'featur', 'narr', 'slow', 'thought', 'give', 'real', 'sens', 'protagonist', 'thought', 'travel', 'destin', 'alvin', 'constantli', 'remind', 'past', 'relationship', 'wife', 'children', 'brother', 'yet', 'particularli', 'signific', 'flashback', 'add', 'effect', 'remind', 'convers', 'grandpar', 'conclus', 'arriv', 'like', 'watch', 'boat', 'carri', 'slow', 'meander', 'river', 'beauti', 'watch', 'natur', 'landscap', 'us', 'accentu', 'togeth', 'beauti', 'soundtrack', 'angelo', 'badalamenti', 'make', 'yearn', 'go', 'america', 'perform', 'also', 'excel', 'everi', 'actor', 'believ', 'role', 'richard', 'farnsworth', 'particularli', 'excel', 'oscar', 'nomin', 'greatli', 'deserv', 'shame', 'win', 'regardless', 'howev', 'probabl', 'finest', 'swan', 'song', 'actor', 'whilst', 'straight', 'stori', 'featur', 'none', 'lynch', 'complex', 'narr', 'trademark', 'dialogu', 'film', 'fascin', 'charact', 'studi', 'get', 'old', 'come', 'highli', 'recommend']\n"
]
],
[
[
"**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?",
"_____no_output_____"
],
[
"**Answer:**\n\n* It removes punctuation marks from text\n* It also converts all text to lower case",
"_____no_output_____"
],
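[
"As a quick, hypothetical sanity check (an editorial sketch, not part of the original notebook), we could run `review_to_words` on a short made-up string and confirm both behaviours directly. The exact tokens depend on the NLTK stopword list and the Porter stemmer, so the expected output below is only approximate:\n\n```python\n# Hypothetical spot check of review_to_words (uses the function defined above)\nsample = \"This movie was GREAT!!! <br />The acting was amazing.\"\nprint(review_to_words(sample))\n# Expect lower-cased, stemmed tokens with the HTML tag, punctuation and stopwords removed,\n# e.g. something like ['movi', 'great', 'act', 'amaz']\n```",
"_____no_output_____"
],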
[
"The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.",
"_____no_output_____"
]
],
[
[
"import pickle\n\ncache_dir = os.path.join(\"../cache\", \"sentiment_analysis\") # where to store cache files\nos.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists\n\ndef preprocess_data(data_train, data_test, labels_train, labels_test,\n cache_dir=cache_dir, cache_file=\"preprocessed_data.pkl\"):\n \"\"\"Convert each review to words; read from cache if available.\"\"\"\n\n # If cache_file is not None, try to read from it first\n cache_data = None\n if cache_file is not None:\n try:\n with open(os.path.join(cache_dir, cache_file), \"rb\") as f:\n cache_data = pickle.load(f)\n print(\"Read preprocessed data from cache file:\", cache_file)\n except:\n pass # unable to read from cache, but that's okay\n \n # If cache is missing, then do the heavy lifting\n if cache_data is None:\n # Preprocess training and test data to obtain words for each review\n #words_train = list(map(review_to_words, data_train))\n #words_test = list(map(review_to_words, data_test))\n words_train = [review_to_words(review) for review in data_train]\n words_test = [review_to_words(review) for review in data_test]\n \n # Write to cache file for future runs\n if cache_file is not None:\n cache_data = dict(words_train=words_train, words_test=words_test,\n labels_train=labels_train, labels_test=labels_test)\n with open(os.path.join(cache_dir, cache_file), \"wb\") as f:\n pickle.dump(cache_data, f)\n print(\"Wrote preprocessed data to cache file:\", cache_file)\n else:\n # Unpack data loaded from cache file\n words_train, words_test, labels_train, labels_test = (cache_data['words_train'],\n cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])\n \n return words_train, words_test, labels_train, labels_test",
"_____no_output_____"
],
[
"# Preprocess data\ntrain_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)",
"Read preprocessed data from cache file: preprocessed_data.pkl\n"
]
],
[
[
"## Transform the data\n\nIn the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will construct a feature representation which is very similar. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't contain much information for the purposes of sentiment analysis. The way we will deal with this problem is that we will fix the size of our working vocabulary and we will only include the words that appear most frequently. We will then combine all of the infrequent words into a single category and, in our case, we will label it as `1`.\n\nSince we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.",
"_____no_output_____"
],
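[
"Before building the real dictionary, here is a tiny, hypothetical illustration of the encoding scheme described above (an editorial sketch, not part of the original notebook): `0` is reserved for 'no word' padding, `1` for infrequent words, and frequent vocabulary words start at `2`.\n\n```python\n# Toy example of the integer encoding and padding described above\ntoy_dict = {'movi': 2, 'great': 3}           # pretend these are the only frequent words\ntokens = ['great', 'movi', 'sublim']         # 'sublim' is not in the vocabulary -> infrequent\nencoded = [toy_dict.get(word, 1) for word in tokens]\npadded = encoded + [0] * (8 - len(encoded))  # pad to a fixed length of 8\nprint(padded)                                # [3, 2, 1, 0, 0, 0, 0, 0]\n```",
"_____no_output_____"
],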
[
"### (TODO) Create a word dictionary\n\nTo begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.\n\n> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom collections import Counter\n\ndef build_dict(data, vocab_size = 5000):\n \"\"\"Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer.\"\"\"\n \n # TODO: Determine how often each word appears in `data`. Note that `data` is a list of sentences and that a\n # sentence is a list of words.\n \n word_count = Counter() # A dict storing the words that appear in the reviews along with how often they occur\n \n for i in range(len(data)):\n for word in data[i]:\n word_count[word] += 1\n\n # TODO: Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word and\n # sorted_words[-1] is the least frequently appearing word.\n sorted_words_count = word_count.most_common()\n sorted_words = []\n \n for i in range(len(sorted_words_count)):\n w, _i = sorted_words_count[i]\n sorted_words.append(w)\n \n word_dict = {} # This is what we are building, a dictionary that translates words into integers\n for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the 'no word'\n word_dict[word] = idx + 2 # 'infrequent' labels\n \n return word_dict",
"_____no_output_____"
],
[
"word_dict = build_dict(train_X)",
"_____no_output_____"
]
],
[
[
"**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it makes sense that these words appear frequently in the training set?",
"_____no_output_____"
],
[
"**Answer:**\n\nThe most frequently appearing words are:\n['movi', 'film', 'one', 'like', 'time']\n\nand it makes sense that this words appear frequently because they are all movies term",
"_____no_output_____"
]
],
[
[
"# TODO: Use this space to determine the five most frequently appearing words in the training set.\nprint(list(word_dict)[0:5])\n# print()",
"['movi', 'film', 'one', 'like', 'time']\n"
]
],
[
[
"### Save `word_dict`\n\nLater on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.",
"_____no_output_____"
]
],
[
[
"data_dir = '../data/pytorch' # The folder we will use for storing data\nif not os.path.exists(data_dir): # Make sure that the folder exists\n os.makedirs(data_dir)",
"_____no_output_____"
],
[
"with open(os.path.join(data_dir, 'word_dict.pkl'), \"wb\") as f:\n pickle.dump(word_dict, f)",
"_____no_output_____"
]
],
[
[
"### Transform the reviews\n\nNow that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.",
"_____no_output_____"
]
],
[
[
"def convert_and_pad(word_dict, sentence, pad=500):\n NOWORD = 0 # We will use 0 to represent the 'no word' category\n INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict\n \n working_sentence = [NOWORD] * pad\n \n for word_index, word in enumerate(sentence[:pad]):\n if word in word_dict:\n working_sentence[word_index] = word_dict[word]\n else:\n working_sentence[word_index] = INFREQ\n \n return working_sentence, min(len(sentence), pad)\n\ndef convert_and_pad_data(word_dict, data, pad=500):\n result = []\n lengths = []\n \n for sentence in data:\n converted, leng = convert_and_pad(word_dict, sentence, pad)\n result.append(converted)\n lengths.append(leng)\n \n return np.array(result), np.array(lengths)",
"_____no_output_____"
],
[
"train_X, train_X_len = convert_and_pad_data(word_dict, train_X)\ntest_X, test_X_len = convert_and_pad_data(word_dict, test_X)",
"_____no_output_____"
]
],
[
[
"As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processeed. Does this look reasonable? What is the length of a review in the training set?",
"_____no_output_____"
]
],
[
[
"# Use this cell to examine one of the processed reviews to make sure everything is working as intended.\nprint(len(train_X[10]))\nprint(test_X_len[12])",
"500\n26\n"
]
],
[
[
"**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?",
"_____no_output_____"
],
[
"**Answer:**\n\nWhy this might be a problem is, some sentences length are way less than 500, so we are just populating the rest of words with zero which might increase computational time and load",
"_____no_output_____"
],
[
"## Step 3: Upload the data to S3\n\nAs in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.\n\n### Save the processed training dataset locally\n\nIt is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n \npd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \\\n .to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)",
"_____no_output_____"
]
],
[
[
"### Uploading the training data\n\n\nNext, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.",
"_____no_output_____"
]
],
[
[
"import sagemaker\n\nsagemaker_session = sagemaker.Session()\n\nbucket = sagemaker_session.default_bucket()\nprefix = 'sagemaker/sentiment_rnn'\n\nrole = sagemaker.get_execution_role()",
"_____no_output_____"
],
[
"input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)",
"_____no_output_____"
]
],
[
[
"**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.",
"_____no_output_____"
],
[
"## Step 4: Build and Train the PyTorch Model\n\nIn the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects\n\n - Model Artifacts,\n - Training Code, and\n - Inference Code,\n \neach of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.\n\nWe will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.",
"_____no_output_____"
]
],
[
[
"!pygmentize train/model.py",
"\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mnn\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mnn\u001b[39;49;00m\r\n\r\n\u001b[34mclass\u001b[39;49;00m \u001b[04m\u001b[32mLSTMClassifier\u001b[39;49;00m(nn.Module):\r\n \u001b[33m\"\"\"\u001b[39;49;00m\r\n\u001b[33m This is the simple RNN model we will be using to perform Sentiment Analysis.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n\r\n \u001b[34mdef\u001b[39;49;00m \u001b[32m__init__\u001b[39;49;00m(\u001b[36mself\u001b[39;49;00m, embedding_dim, hidden_dim, vocab_size):\r\n \u001b[33m\"\"\"\u001b[39;49;00m\r\n\u001b[33m Initialize the model by settingg up the various layers.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n \u001b[36msuper\u001b[39;49;00m(LSTMClassifier, \u001b[36mself\u001b[39;49;00m).\u001b[32m__init__\u001b[39;49;00m()\r\n\r\n \u001b[36mself\u001b[39;49;00m.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=\u001b[34m0\u001b[39;49;00m)\r\n \u001b[36mself\u001b[39;49;00m.lstm = nn.LSTM(embedding_dim, hidden_dim)\r\n \u001b[36mself\u001b[39;49;00m.dense = nn.Linear(in_features=hidden_dim, out_features=\u001b[34m1\u001b[39;49;00m)\r\n \u001b[36mself\u001b[39;49;00m.sig = nn.Sigmoid()\r\n \r\n \u001b[36mself\u001b[39;49;00m.word_dict = \u001b[34mNone\u001b[39;49;00m\r\n\r\n \u001b[34mdef\u001b[39;49;00m \u001b[32mforward\u001b[39;49;00m(\u001b[36mself\u001b[39;49;00m, x):\r\n \u001b[33m\"\"\"\u001b[39;49;00m\r\n\u001b[33m Perform a forward pass of our model on some input.\u001b[39;49;00m\r\n\u001b[33m \"\"\"\u001b[39;49;00m\r\n x = x.t()\r\n lengths = x[\u001b[34m0\u001b[39;49;00m,:]\r\n reviews = x[\u001b[34m1\u001b[39;49;00m:,:]\r\n embeds = \u001b[36mself\u001b[39;49;00m.embedding(reviews)\r\n lstm_out, _ = \u001b[36mself\u001b[39;49;00m.lstm(embeds)\r\n out = \u001b[36mself\u001b[39;49;00m.dense(lstm_out)\r\n out = out[lengths - \u001b[34m1\u001b[39;49;00m, \u001b[36mrange\u001b[39;49;00m(\u001b[36mlen\u001b[39;49;00m(lengths))]\r\n \u001b[34mreturn\u001b[39;49;00m \u001b[36mself\u001b[39;49;00m.sig(out.squeeze())\r\n"
]
],
[
[
"The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.\n\nFirst we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.utils.data\n\n# Read in only the first 250 rows\ntrain_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)\n\n# Turn the input pandas dataframe into tensors\ntrain_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()\ntrain_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()\n\n# Build the dataset\ntrain_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)\n# Build the dataloader\ntrain_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)",
"_____no_output_____"
]
],
[
[
"### (TODO) Writing the training method\n\nNext we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.",
"_____no_output_____"
]
],
[
[
"def train(model, train_loader, epochs, optimizer, loss_fn, device):\n for epoch in range(1, epochs + 1):\n model.train()\n total_loss = 0\n for batch in train_loader: \n batch_X, batch_y = batch\n \n batch_X = batch_X.to(device)\n batch_y = batch_y.to(device)\n \n # TODO: Complete this train method to train the model provided.\n pred = model(batch_X)\n loss = loss_fn(pred, batch_y)\n loss.backward()\n optimizer.step()\n \n total_loss += loss.data.item()\n print(\"Epoch: {}, BCELoss: {}\".format(epoch, total_loss / len(train_loader)))",
"_____no_output_____"
]
],
[
[
"Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\nfrom train.model import LSTMClassifier\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel = LSTMClassifier(32, 100, 5000).to(device)\noptimizer = optim.Adam(model.parameters())\nloss_fn = torch.nn.BCELoss()\n\ntrain(model, train_sample_dl, 5, optimizer, loss_fn, device)",
"Epoch: 1, BCELoss: 0.6959220886230468\nEpoch: 2, BCELoss: 0.6813374280929565\nEpoch: 3, BCELoss: 0.6672957181930542\nEpoch: 4, BCELoss: 0.6517896890640259\nEpoch: 5, BCELoss: 0.6320561170578003\n"
]
],
[
[
"In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.",
"_____no_output_____"
],
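[
"For illustration only (this example is not taken from the project files), such a `requirements.txt` simply lists one required package per line, optionally with version pins; the actual file in the `train` directory may list different packages:\n\n```text\npandas\nnumpy\nnltk\nbeautifulsoup4\nhtml5lib\n```",
"_____no_output_____"
],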
[
"### (TODO) Training the model\n\nWhen a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.\n\n**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.\n\nThe way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.",
"_____no_output_____"
]
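,
[
"As an editorial illustration (not part of the original project files), the argument-parsing pattern described above typically looks something like the sketch below; the provided `train/train.py` may use different argument names, defaults and environment variables:\n\n```python\n# Hypothetical sketch of how a SageMaker training script can receive hyperparameters\nimport argparse\nimport os\n\nparser = argparse.ArgumentParser()\n\n# Hyperparameters arrive as command-line arguments, e.g. --epochs 25 --hidden_dim 300\nparser.add_argument('--epochs', type=int, default=10)\nparser.add_argument('--hidden_dim', type=int, default=100)\n\n# SageMaker also exposes useful locations through environment variables\nparser.add_argument('--model-dir', type=str, default=os.environ.get('SM_MODEL_DIR', '.'))\nparser.add_argument('--data-dir', type=str, default=os.environ.get('SM_CHANNEL_TRAINING', '.'))\n\nargs = parser.parse_args()\nprint(args.epochs, args.hidden_dim, args.model_dir, args.data_dir)\n```",
"_____no_output_____"
]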
],
[
[
"from sagemaker.pytorch import PyTorch\n\nestimator = PyTorch(entry_point=\"train.py\",\n source_dir=\"train\",\n role=role,\n framework_version='0.4.0',\n train_instance_count=1,\n train_instance_type='ml.p2.xlarge',\n# image_name='sagemaker-pytorch-2020-06-29-01-11-40-917',\n hyperparameters={\n 'epochs': 25,\n 'hidden_dim': 300,\n })",
"_____no_output_____"
],
[
"estimator.fit({'training': input_data})",
"'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\n's3_input' class will be renamed to 'TrainingInput' in SageMaker Python SDK v2.\n'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\n"
]
],
[
[
"## Step 5: Testing the model\n\nAs mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.\n\n## Step 6: Deploy the model for testing\n\nNow that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.\n\nThere is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.\n\n**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )\n\nSince we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.\n\n**NOTE:** When deploying a model you are asking SageMaker to launch an compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.\n\nIn other words **If you are no longer using a deployed endpoint, shut it down!**\n\n**TODO:** Deploy the trained model.",
"_____no_output_____"
]
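,
[
"As a brief editorial sketch of the structure this relies on (the provided `train.py` already follows it, so nothing needs to change), the main guard keeps the training code from running when the inference container merely imports `model_fn`:\n\n```python\n# Hypothetical skeleton only; see train/train.py for the real implementation.\ndef model_fn(model_dir):\n    \"\"\"Load and return the trained model from model_dir (sketch only).\"\"\"\n    pass  # the real script rebuilds the model and loads model.pth and word_dict.pkl\n\nif __name__ == '__main__':\n    # Argument parsing and the training loop live here, so they only run when the\n    # script is executed directly, not when model_fn is imported for inference.\n    pass\n```",
"_____no_output_____"
]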
],
[
[
"# TODO: Deploy the trained model\npredictor = estimator.deploy(initial_instance_count=1,\n instance_type='ml.m4.xlarge')",
"Parameter image will be renamed to image_uri in SageMaker Python SDK v2.\n'create_image_uri' will be deprecated in favor of 'ImageURIProvider' class in SageMaker Python SDK v2.\n"
],
[
"# from sagemaker.pytorch import PyTorch\n\n# my_training_job_name = 'sagemaker-pytorch-2020-06-29-22-56-57-261'\n\n# estimator = PyTorch.attach(my_training_job_name)\n# estimator.fit({'training': input_data})",
"_____no_output_____"
],
[
"# predictor = estimator.deploy(initial_instance_count=1,\n# instance_type='ml.m4.xlarge')",
"_____no_output_____"
]
],
[
[
"## Step 7 - Use the model for testing\n\nOnce deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.",
"_____no_output_____"
]
],
[
[
"test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)",
"_____no_output_____"
],
[
"# We split the data into chunks and send each chunk seperately, accumulating the results.\n\ndef predict(data, rows=512):\n split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))\n predictions = np.array([])\n for array in split_array:\n predictions = np.append(predictions, predictor.predict(array))\n \n return predictions",
"_____no_output_____"
],
[
"predictions = predict(test_X.values)\npredictions = [round(num) for num in predictions]",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\naccuracy_score(test_y, predictions)",
"_____no_output_____"
]
],
[
[
"**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?",
"_____no_output_____"
],
[
"**Answer:**\nThe XGBoost model performs more great than the custom model, custom model has an accuracy of 0.7728, but the later had 0.87232\n\nThe reason i think the two models perform differently is they both have different achitectures\n\nIn this case i think the XGBoost model is better than the custom LSTM model sentiment analysis, but i think the LSTM can be tuned to perform far more better",
"_____no_output_____"
],
[
"### (TODO) More testing\n\nWe now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.",
"_____no_output_____"
]
],
[
[
"test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'",
"_____no_output_____"
]
],
[
[
"The question we now need to answer is, how do we send this review to our model?\n\nRecall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.\n - Removed any html tags and stemmed the input\n - Encoded the review as a sequence of integers using `word_dict`\n \nIn order process the review we will need to repeat these two steps.\n\n**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.",
"_____no_output_____"
]
],
[
[
"# TODO: Convert test_review into a form usable by the model and save the results in test_data\nsentences, length = convert_and_pad(word_dict, review_to_words(test_review))\n\ntest_data = [0]\ntest_data[0] = np.append(np.array(length),np.array(sentences))",
"_____no_output_____"
]
],
[
[
"Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.",
"_____no_output_____"
]
],
[
[
"predictor.predict(test_data)",
"_____no_output_____"
]
],
[
[
"Since the return value of our model is close to `1`, we can be certain that the review we submitted is positive.",
"_____no_output_____"
],
[
"### Delete the endpoint\n\nOf course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.",
"_____no_output_____"
]
],
[
[
"estimator.delete_endpoint()",
"_____no_output_____"
]
],
[
[
"## Step 6 (again) - Deploy the model for the web app\n\nNow that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.\n\nAs we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.\n\nWe will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.\n\nWhen deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.\n - `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.\n - `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.\n - `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.\n - `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.\n\nFor the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.\n\n### (TODO) Writing inference code\n\nBefore writing our custom inference code, we will begin by taking a look at the code which has been provided.",
"_____no_output_____"
]
],
[
[
"!pygmentize serve/predict.py",
"\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36margparse\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mjson\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mos\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mpickle\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36msys\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36msagemaker_containers\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mpandas\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mpd\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mnumpy\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mnp\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mnn\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36mnn\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36moptim\u001b[39;49;00m \u001b[34mas\u001b[39;49;00m \u001b[04m\u001b[36moptim\u001b[39;49;00m\n\u001b[34mimport\u001b[39;49;00m \u001b[04m\u001b[36mtorch\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mutils\u001b[39;49;00m\u001b[04m\u001b[36m.\u001b[39;49;00m\u001b[04m\u001b[36mdata\u001b[39;49;00m\n\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36mmodel\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m LSTMClassifier\n\n\u001b[34mfrom\u001b[39;49;00m \u001b[04m\u001b[36mutils\u001b[39;49;00m \u001b[34mimport\u001b[39;49;00m review_to_words, convert_and_pad\n\n\u001b[34mdef\u001b[39;49;00m \u001b[32mmodel_fn\u001b[39;49;00m(model_dir):\n \u001b[33m\"\"\"Load the PyTorch model from the `model_dir` directory.\"\"\"\u001b[39;49;00m\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mLoading model.\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\n\n \u001b[37m# First, load the parameters used to create the model.\u001b[39;49;00m\n model_info = {}\n model_info_path = os.path.join(model_dir, \u001b[33m'\u001b[39;49;00m\u001b[33mmodel_info.pth\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \u001b[34mwith\u001b[39;49;00m \u001b[36mopen\u001b[39;49;00m(model_info_path, \u001b[33m'\u001b[39;49;00m\u001b[33mrb\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m) \u001b[34mas\u001b[39;49;00m f:\n model_info = torch.load(f)\n\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mmodel_info: \u001b[39;49;00m\u001b[33m{}\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m.format(model_info))\n\n \u001b[37m# Determine the device and construct the model.\u001b[39;49;00m\n device = torch.device(\u001b[33m\"\u001b[39;49;00m\u001b[33mcuda\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[34mif\u001b[39;49;00m torch.cuda.is_available() \u001b[34melse\u001b[39;49;00m \u001b[33m\"\u001b[39;49;00m\u001b[33mcpu\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\n model = LSTMClassifier(model_info[\u001b[33m'\u001b[39;49;00m\u001b[33membedding_dim\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m], model_info[\u001b[33m'\u001b[39;49;00m\u001b[33mhidden_dim\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m], model_info[\u001b[33m'\u001b[39;49;00m\u001b[33mvocab_size\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m])\n\n \u001b[37m# Load the store model parameters.\u001b[39;49;00m\n model_path = os.path.join(model_dir, 
\u001b[33m'\u001b[39;49;00m\u001b[33mmodel.pth\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \u001b[34mwith\u001b[39;49;00m \u001b[36mopen\u001b[39;49;00m(model_path, \u001b[33m'\u001b[39;49;00m\u001b[33mrb\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m) \u001b[34mas\u001b[39;49;00m f:\n model.load_state_dict(torch.load(f))\n\n \u001b[37m# Load the saved word_dict.\u001b[39;49;00m\n word_dict_path = os.path.join(model_dir, \u001b[33m'\u001b[39;49;00m\u001b[33mword_dict.pkl\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \u001b[34mwith\u001b[39;49;00m \u001b[36mopen\u001b[39;49;00m(word_dict_path, \u001b[33m'\u001b[39;49;00m\u001b[33mrb\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m) \u001b[34mas\u001b[39;49;00m f:\n model.word_dict = pickle.load(f)\n\n model.to(device).eval()\n\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m\"\u001b[39;49;00m\u001b[33mDone loading model.\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\n \u001b[34mreturn\u001b[39;49;00m model\n\n\u001b[34mdef\u001b[39;49;00m \u001b[32minput_fn\u001b[39;49;00m(serialized_input_data, content_type):\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mDeserializing the input data.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \u001b[34mif\u001b[39;49;00m content_type == \u001b[33m'\u001b[39;49;00m\u001b[33mtext/plain\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m:\n data = serialized_input_data.decode(\u001b[33m'\u001b[39;49;00m\u001b[33mutf-8\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \u001b[34mreturn\u001b[39;49;00m data\n \u001b[34mraise\u001b[39;49;00m \u001b[36mException\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mRequested unsupported ContentType in content_type: \u001b[39;49;00m\u001b[33m'\u001b[39;49;00m + content_type)\n\n\u001b[34mdef\u001b[39;49;00m \u001b[32moutput_fn\u001b[39;49;00m(prediction_output, accept):\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mSerializing the generated output.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \u001b[34mreturn\u001b[39;49;00m \u001b[36mstr\u001b[39;49;00m(prediction_output)\n\n\u001b[34mdef\u001b[39;49;00m \u001b[32mpredict_fn\u001b[39;49;00m(input_data, model):\n \u001b[36mprint\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mInferring sentiment of input data.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n\n device = torch.device(\u001b[33m\"\u001b[39;49;00m\u001b[33mcuda\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m \u001b[34mif\u001b[39;49;00m torch.cuda.is_available() \u001b[34melse\u001b[39;49;00m \u001b[33m\"\u001b[39;49;00m\u001b[33mcpu\u001b[39;49;00m\u001b[33m\"\u001b[39;49;00m)\n \n \u001b[34mif\u001b[39;49;00m model.word_dict \u001b[35mis\u001b[39;49;00m \u001b[34mNone\u001b[39;49;00m:\n \u001b[34mraise\u001b[39;49;00m \u001b[36mException\u001b[39;49;00m(\u001b[33m'\u001b[39;49;00m\u001b[33mModel has not been loaded properly, no word_dict.\u001b[39;49;00m\u001b[33m'\u001b[39;49;00m)\n \n \u001b[37m# TODO: Process input_data so that it is ready to be sent to our model.\u001b[39;49;00m\n \u001b[37m# You should produce two variables:\u001b[39;49;00m\n \u001b[37m# data_X - A sequence of length 500 which represents the converted review\u001b[39;49;00m\n \u001b[37m# data_len - The length of the review\u001b[39;49;00m\n \n convert = convert_and_pad(model.word_dict, review_to_words(input_data))\n\n data_X = convert[\u001b[34m0\u001b[39;49;00m]\n data_len = convert[\u001b[34m1\u001b[39;49;00m]\n\n \u001b[37m# Using data_X and data_len we construct an appropriate input tensor. 
Remember\u001b[39;49;00m\n \u001b[37m# that our model expects input data of the form 'len, review[500]'.\u001b[39;49;00m\n data_pack = np.hstack((data_len, data_X))\n data_pack = data_pack.reshape(\u001b[34m1\u001b[39;49;00m, -\u001b[34m1\u001b[39;49;00m)\n \n data = torch.from_numpy(data_pack)\n data = data.to(device)\n\n \u001b[37m# Make sure to put the model into evaluation mode\u001b[39;49;00m\n model.eval()\n\n \u001b[37m# TODO: Compute the result of applying the model to the input data. The variable `result` should\u001b[39;49;00m\n \u001b[37m# be a numpy array which contains a single integer which is either 1 or 0\u001b[39;49;00m\n \u001b[34mwith\u001b[39;49;00m torch.no_grad():\n result = model(data)\n\n \u001b[34mreturn\u001b[39;49;00m np.round(result.numpy())\n"
]
],
[
[
"As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.\n\n**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.",
"_____no_output_____"
],
[
"### Deploying the model\n\nNow that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.\n\n**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accomodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to sent image data.",
"_____no_output_____"
]
],
[
[
"from sagemaker.predictor import RealTimePredictor\nfrom sagemaker.pytorch import PyTorchModel\nfrom sagemaker.pytorch import PyTorch\n\nmy_training_job_name = 'sagemaker-pytorch-2020-07-01-07-19-11-965'\n\nestimator = PyTorch.attach(my_training_job_name)\n\n\nclass StringPredictor(RealTimePredictor):\n def __init__(self, endpoint_name, sagemaker_session):\n super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')\n\nmodel = PyTorchModel(model_data=estimator.model_data,\n role = role,\n framework_version='0.4.0',\n entry_point='predict.py',\n source_dir='serve',\n predictor_cls=StringPredictor)\npredictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')",
"Parameter image will be renamed to image_uri in SageMaker Python SDK v2.\n"
]
],
[
[
"### Testing the model\n\nNow that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews and send them to the endpoint, then collect the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long and so testing the entire data set would be prohibitive.",
"_____no_output_____"
]
],
[
[
"import glob\n\ndef test_reviews(data_dir='../data/aclImdb', stop=250):\n \n results = []\n ground = []\n \n # We make sure to test both positive and negative reviews \n for sentiment in ['pos', 'neg']:\n \n path = os.path.join(data_dir, 'test', sentiment, '*.txt')\n files = glob.glob(path)\n \n files_read = 0\n \n print('Starting ', sentiment, ' files')\n \n # Iterate through the files and send them to the predictor\n for f in files:\n with open(f) as review:\n # First, we store the ground truth (was the review positive or negative)\n if sentiment == 'pos':\n ground.append(1)\n else:\n ground.append(0)\n # Read in the review and convert to 'utf-8' for transmission via HTTP\n review_input = review.read().encode('utf-8')\n # Send the review to the predictor and store the results\n results.append(float(predictor.predict(review_input)))\n \n # Sending reviews to our endpoint one at a time takes a while so we\n # only send a small number of reviews\n files_read += 1\n if files_read == stop:\n break\n \n return ground, results",
"_____no_output_____"
],
[
"ground, results = test_reviews()",
"Starting pos files\nStarting neg files\n"
],
[
"from sklearn.metrics import accuracy_score\naccuracy_score(ground, results)",
"_____no_output_____"
]
],
[
[
"As an additional test, we can try sending the `test_review` that we looked at earlier.",
"_____no_output_____"
]
],
[
[
"test_review = \"a very very bad review\"\npredictor.predict(test_review)",
"_____no_output_____"
]
],
[
[
"Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.",
"_____no_output_____"
],
[
"## Step 7 (again): Use the model for the web app\n\n> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.\n\nSo far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.\n\n<img src=\"Web App Diagram.svg\">\n\nThe diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.\n\nIn the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and recieve data from a SageMaker endpoint.\n\nLastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.\n\n### Setting up a Lambda function\n\nThe first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.\n\n#### Part A: Create an IAM Role for the Lambda function\n\nSince we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.\n\nUsing the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.\n\nIn the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.\n\nLastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.\n\n#### Part B: Create a Lambda function\n\nNow it is time to actually create the Lambda function.\n\nUsing the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. 
Then, click on **Create Function**.\n\nOn the next page you will see some information about the Lambda function you've just created. If you scroll down, you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below. \n\n```python\n# We need to use the low-level library to interact with SageMaker since the SageMaker API\n# is not available natively through Lambda.\nimport boto3\n\ndef lambda_handler(event, context):\n\n    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.\n    runtime = boto3.Session().client('sagemaker-runtime')\n\n    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given\n    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created\n                                       ContentType = 'text/plain', # The data format that is expected\n                                       Body = event['body']) # The actual review\n\n    # The response is an HTTP response whose body contains the result of our inference\n    result = response['Body'].read().decode('utf-8')\n\n    return {\n        'statusCode' : 200,\n        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },\n        'body' : result\n    }\n```\n\nOnce you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.",
"_____no_output_____"
]
],
[
[
"predictor.endpoint",
"_____no_output_____"
]
],
[
[
"Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.\n\n### Setting up API Gateway\n\nNow that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.\n\nUsing AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.\n\nOn the next page, make sure that **New API** is selected and give the new api a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.\n\nNow we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.\n\nSelect the **Actions** dropdown menu and click **Create Method**. A new blank method will be created, select its dropdown menu and select **POST**, then click on the check mark beside it.\n\nFor the integration point, make sure that **Lambda Function** is selected and click on the **Use Lambda Proxy integration**. This option makes sure that the data that is sent to the API is then sent directly to the Lambda function with no processing. It also means that the return value must be a proper response object as it will also not be processed by API Gateway.\n\nType the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.\n\nThe last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.\n\nYou have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.",
"_____no_output_____"
],
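[
"Before wiring up the web page, it can help to sanity check the new public API directly. The short sketch below is not part of the original project; it simply POSTs a plain-text review to the Invoke URL using the `requests` library (the URL is a placeholder that you must replace with the one you copied from API Gateway):\n\n```python\nimport requests\n\n# Placeholder - paste the Invoke URL copied from the API Gateway console here.\napi_url = '**REPLACE WITH PUBLIC API URL**'\n\n# POST a plain-text review; API Gateway hands the body to the Lambda function,\n# which invokes the SageMaker endpoint and returns '0.0' (negative) or '1.0' (positive).\nreview = 'This movie was simply wonderful, I loved every minute of it!'\nresponse = requests.post(api_url, data=review.encode('utf-8'), headers={'Content-Type': 'text/plain'})\nprint(response.status_code, response.text)\n```",
"_____no_output_____"
],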
[
"## Step 4: Deploying our web app\n\nNow that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.\n\nIn the `website` folder there should be a file called `index.html`. Download the file to your computer and open that file up in a text editor of your choice. There should be a line which contains **\\*\\*REPLACE WITH PUBLIC API URL\\*\\***. Replace this string with the url that you wrote down in the last step and then save the file.\n\nNow, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.\n\nIf you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!\n\n> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.\n\n**TODO:** Make sure that you include the edited `index.html` file in your project submission.",
"_____no_output_____"
],
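[
"If you do choose to host the page as a static site on Amazon's S3 as mentioned above, the sketch below shows roughly what the upload step could look like with `boto3`. It is only an illustration: the bucket name is hypothetical, the bucket must already exist, and its settings must allow publicly readable objects.\n\n```python\nimport boto3\n\n# Hypothetical bucket name - replace with your own, already-created bucket.\nbucket_name = 'my-sentiment-web-app'\n\n# Upload the edited index.html so that it is served as a public web page.\ns3 = boto3.client('s3')\ns3.upload_file('website/index.html', bucket_name, 'index.html',\n               ExtraArgs={'ContentType': 'text/html', 'ACL': 'public-read'})\n\nprint('https://' + bucket_name + '.s3.amazonaws.com/index.html')\n```",
"_____no_output_____"
],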
[
"Now that your web app is working, trying playing around with it and see how well it works.\n\n**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?",
"_____no_output_____"
],
[
"**Answer:**\nReview: What the heck\nPrediction: NEGATIVE (predicted correctly)",
"_____no_output_____"
],
[
"### Delete the endpoint\n\nRemember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.",
"_____no_output_____"
]
],
[
[
"predictor.delete_endpoint()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
d0c20b405a939e9ec6b1e7d52a4e379948c38e46 | 11,271 | ipynb | Jupyter Notebook | database_engineering.ipynb | karawenz01/homework_11 | 51fdfc8d0ce382f7038b6da215fd4da722701aed | [
"MIT"
] | null | null | null | database_engineering.ipynb | karawenz01/homework_11 | 51fdfc8d0ce382f7038b6da215fd4da722701aed | [
"MIT"
] | null | null | null | database_engineering.ipynb | karawenz01/homework_11 | 51fdfc8d0ce382f7038b6da215fd4da722701aed | [
"MIT"
] | null | null | null | 27.82963 | 92 | 0.394197 | [
[
[
"# import pandas\nimport pandas as pd",
"_____no_output_____"
],
[
"# import csv and read w pandas\nmeasurments_df = pd.read_csv('clean_measurements.csv')",
"_____no_output_____"
],
[
"measurments_df.head()",
"_____no_output_____"
],
[
"stations_df = pd.read_csv('clean_stations.csv')",
"_____no_output_____"
],
[
"stations_df",
"_____no_output_____"
],
[
"# import dependencies for sql\nimport sqlalchemy\nfrom sqlalchemy.ext.automap import automap_base\nfrom sqlalchemy.orm import Session\nfrom sqlalchemy import create_engine, inspect",
"_____no_output_____"
],
[
"# Create the connection engine\nengine = create_engine(\"sqlite:///.hawaii.sqlite\", echo = False)\nconn = engine.connect()",
"_____no_output_____"
],
[
"# Dependencies\n# ----------------------------------\nfrom sqlalchemy.ext.declarative import declarative_base\nBase = declarative_base()",
"_____no_output_____"
],
[
"# more dependencies\nfrom sqlalchemy import Column, Integer, String, Float, Date",
"_____no_output_____"
],
[
"class Measurement(Base):\n __tablename__ = 'measurments'\n id = Column(Integer, primary_key=True)\n station = Column(String(225))\n date = Column(Date)\n prcp = Column(Integer)\n tobs = Column(Integer)\n ",
"_____no_output_____"
],
[
"class Station(Base):\n __tablename__ = 'station'\n id = Column(Integer, primary_key=True)\n station = Column(String(255))\n name = Column(String(255))\n latitude = Column(Integer)\n longitude = Column(Integer)\n elevation = Column(Integer)",
"_____no_output_____"
],
[
"Base.metadata.create_all(conn)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c20ec5b5acdcb9df8cc8a084bbf0267824e581 | 2,967 | ipynb | Jupyter Notebook | notebooks/2020.09.30_models.ipynb | wconnell/film-gex | 09c93cce97a56b6584ebf002eb0240077074ccd8 | [
"Apache-2.0"
] | null | null | null | notebooks/2020.09.30_models.ipynb | wconnell/film-gex | 09c93cce97a56b6584ebf002eb0240077074ccd8 | [
"Apache-2.0"
] | null | null | null | notebooks/2020.09.30_models.ipynb | wconnell/film-gex | 09c93cce97a56b6584ebf002eb0240077074ccd8 | [
"Apache-2.0"
] | 1 | 2020-11-29T23:34:41.000Z | 2020-11-29T23:34:41.000Z | 20.462069 | 77 | 0.529491 | [
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport sys\nimport os\nmodule_path = os.path.abspath(os.path.join(os.pardir))\nif module_path not in sys.path:\n sys.path.append(module_path)",
"_____no_output_____"
],
[
"import os\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\nimport pytorch_lightning as pl\nfrom pytorch_lightning.metrics.sklearns import R2Score\n\nimport pandas as pd\nimport numpy as np\nimport joblib\nfrom pathlib import Path\n\nfrom sklearn.preprocessing import StandardScaler\n\n# Custom\nfrom project.film_model import LinearBlock, FiLMGenerator",
"_____no_output_____"
]
],
[
[
"## FiLM Structures",
"_____no_output_____"
]
],
[
[
"from project.film_model import FiLMNetwork, ConcatNetwork",
"_____no_output_____"
],
[
"bar = ConcatNetwork(978, 513)",
"_____no_output_____"
],
[
"bar.hparams",
"_____no_output_____"
],
[
"bar.inputs_emb.out_sz + bar.conds_emb.out_sz",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c2133f3a217e39ec3fc22ae0b39084599750f5 | 78,627 | ipynb | Jupyter Notebook | ImageColorizerStableTests.ipynb | ssgalitsky/DeOldify | dcae8e982656826d3f348a7ce70fe0f191d7054a | [
"MIT"
] | 5 | 2019-11-01T02:37:42.000Z | 2022-03-23T11:32:17.000Z | ImageColorizerStableTests.ipynb | ssgalitsky/DeOldify | dcae8e982656826d3f348a7ce70fe0f191d7054a | [
"MIT"
] | null | null | null | ImageColorizerStableTests.ipynb | ssgalitsky/DeOldify | dcae8e982656826d3f348a7ce70fe0f191d7054a | [
"MIT"
] | 1 | 2021-03-14T10:34:02.000Z | 2021-03-14T10:34:02.000Z | 23.611712 | 135 | 0.596309 | [
[
[
"import os\nos.environ['CUDA_VISIBLE_DEVICES']='0' ",
"_____no_output_____"
],
[
"from fasterai.visualize import *\nplt.style.use('dark_background')",
"_____no_output_____"
],
[
"#Adjust render_factor (int) if image doesn't look quite right (max 64 on 11GB GPU). The default here works for most photos. \n#It literally just is a number multiplied by 16 to get the square render resolution. \n#Note that this doesn't affect the resolution of the final output- the output is the same resolution as the input.\n#Example: render_factor=21 => color is rendered at 16x21 = 336x336 px. \nrender_factor=35",
"_____no_output_____"
],
[
"vis = get_image_colorizer(render_factor=render_factor, artistic=False)\n#vis = get_video_colorizer(render_factor=render_factor).vis",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/poolparty.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1852GatekeepersWindsor.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Chief.jpg\", render_factor=10, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1850SchoolForGirls.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AtlanticCityBeach1905.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CottonMillWorkers1913.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BrooklynNavyYardHospital.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FinnishPeasant1867.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AtlanticCity1905.png\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PushingCart.jpg\", render_factor=24, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Drive1905.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/IronLung.png\", render_factor=26, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FamilyWithDog.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/DayAtSeaBelgium.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/marilyn_woods.jpg\", render_factor=16, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/OldWomanSweden1904.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WomenTapingPlanes.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/overmiller.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BritishDispatchRider.jpg\", render_factor=16, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/MuseauNacionalDosCoches.jpg\", render_factor=19, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/abe.jpg\", render_factor=13, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/RossCorbettHouseCork.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HPLabelleOfficeMontreal.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/einstein_beach.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/airmen1943.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/20sWoman.jpg\", render_factor=24, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/egypt-1.jpg\", render_factor=18, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Rutherford_Hayes.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/einstein_portrait.jpg\", render_factor=15, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/pinkerton.jpg\", render_factor=7, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WaltWhitman.jpg\", render_factor=9, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dorothea-lange.jpg\", render_factor=18, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Hemmingway2.jpg\", render_factor=22, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/hemmingway.jpg\", render_factor=14, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/smoking_kid.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/teddy_rubble.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dustbowl_2.jpg\", render_factor=16, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/camera_man.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/migrant_mother.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/marktwain.jpg\", render_factor=14, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HelenKeller.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Evelyn_Nesbit.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Eddie-Adams.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/soldier_kids.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AnselAdamsYosemite.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/unnamed.jpg\", render_factor=28, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/workers_canyon.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CottonMill.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/JudyGarland.jpeg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/kids_pit.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/last_samurai.jpg\", render_factor=22, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AnselAdamsWhiteChurch.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/opium.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dorothea_lange_2.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/rgs.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/wh-auden.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/w-b-yeats.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/marilyn_portrait.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/wilson-slaverevivalmeeting.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ww1_trench.jpg\", render_factor=18, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/women-bikers.png\", render_factor=23, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Unidentified1855.jpg\", render_factor=19, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/skycrapper_lunch.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/sioux.jpg\", render_factor=28, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/school_kids.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/royal_family.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/redwood_lumberjacks.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/poverty.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/paperboy.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NativeAmericans.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/helmut_newton-.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Greece1911.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FatMenClub.jpg\", render_factor=18, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/EgyptColosus.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/egypt-2.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dustbowl_sd.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dustbowl_people.jpg\", render_factor=24, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dustbowl_5.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/dustbowl_1.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/DriveThroughGiantTree.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/covered-wagons-traveling.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/civil-war_2.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/civil_war_4.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/civil_war_3.jpg\", render_factor=28, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/civil_war.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BritishSlum.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/bicycles.jpg\", render_factor=27, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/brooklyn_girls_1940s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/40sCouple.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1946Wedding.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Dolores1920s.jpg\", render_factor=18, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/TitanicGym.jpg\", render_factor=26, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FrenchVillage1950s.jpg\", render_factor=41, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FrenchVillage1950s.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ClassDivide1930sBrittain.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1870sSphinx.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890Surfer.png\", render_factor=37, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/TV1930s.jpg\", render_factor=43, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1864UnionSoldier.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890sMedStudents.jpg\", render_factor=18, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BellyLaughWWI.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PiggyBackRide.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HealingTree.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ManPile.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1910Bike.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FreeportIL.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/DutchBabyCoupleEllis.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/InuitWoman1903.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1920sDancing.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AirmanDad.jpg\", render_factor=13, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1910Racket.png\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1880Paris.jpg\", render_factor=16, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Deadwood1860s.jpg\", render_factor=13, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1860sSamauris.jpg\", render_factor=43, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonUnderground1860.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Mid1800sSisters.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1860Girls.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SanFran1851.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Kabuki1870s.png\", render_factor=8, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Mormons1870s.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/EgyptianWomenLate1800s.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PicadillyLate1800s.jpg\", render_factor=26, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SutroBaths1880s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1880sBrooklynBridge.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ChinaOpiumc1880.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Locomotive1880s.jpg\", render_factor=9, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ViennaBoys1880s.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/VictorianDragQueen1880s.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Sami1880s.jpg\", render_factor=44, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ArkansasCowboys1880s.jpg\", render_factor=22, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Ballet1890Russia.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Rottindean1890s.png\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890sPingPong.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/London1937.png\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Harlem1932.jpg\", render_factor=37, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/OregonTrail1870s.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/EasterNyc1911.jpg\", render_factor=19, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1899NycBlizzard.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Edinburgh1920s.jpg\", render_factor=17, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890sShoeShopOhio.jpg\", render_factor=46, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890sTouristsEgypt.png\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1938Reading.jpg\", render_factor=19, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1850Geography.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1901Electrophone.jpg\", render_factor=10, compare=True)",
"_____no_output_____"
],
[
"for i in range(8, 47):\n vis.plot_transformed_image(\"test_images/1901Electrophone.jpg\", render_factor=i, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Texas1938Woman.png\", render_factor=38, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/MaioreWoman1895NZ.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WestVirginiaHouse.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1920sGuadalope.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1909Chicago.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1920sFarmKid.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ParisLate1800s.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1900sDaytonaBeach.png\", render_factor=23, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1930sGeorgia.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NorwegianBride1920s.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Depression.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1888Slum.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LivingRoom1920Sweden.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1896NewsBoyGirl.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PetDucks1927.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1899SodaFountain.jpg\", render_factor=46, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/TimesSquare1955.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PuppyGify.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890CliffHouseSF.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1908FamilyPhoto.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1900sSaloon.jpg\", render_factor=43, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890BostonHospital.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1870Girl.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AustriaHungaryWomen1890s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Shack.jpg\",render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Apsaroke1908.png\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1948CarsGrandma.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PlanesManhattan1931.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WorriedKid1940sNyc.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1920sFamilyPhoto.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CatWash1931.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1940sBeerRiver.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/VictorianLivingRoom.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1897BlindmansBluff.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1874Mexico.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/MadisonSquare1900.jpg\", render_factor=46, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1867MusicianConstantinople.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1925Girl.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1907Cowboys.jpg\", render_factor=28, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WWIIPeeps.jpg\", render_factor=37, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BabyBigBoots.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1895BikeMaidens.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/IrishLate1800s.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LibraryOfCongress1910.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1875Olds.jpg\", render_factor=16, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SenecaNative1908.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WWIHospital.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1892WaterLillies.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/GreekImmigrants1905.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FatMensShop.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/KidCage1930s.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FarmWomen1895.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NewZealand1860s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/JerseyShore1905.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonKidsEarly1900s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NYStreetClean1906.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Boston1937.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Cork1905.jpg\", render_factor=28, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BoxedBedEarly1900s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ZoologischerGarten1898.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/EmpireState1930.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Agamemnon1919.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AppalachianLoggers1901.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WWISikhs.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/MementoMori1865.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/RepBrennanRadio1922.jpg\", render_factor=43, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Late1800sNative.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/GasPrices1939.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1933RockefellerCenter.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Scotland1919.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1920CobblersShopLondon.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1909ParisFirstFemaleTaxisDriver.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HoovervilleSeattle1932.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ElephantLondon1934.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Jane_Addams.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AnselAdamsAdobe.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CricketLondon1930.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Donegal1907Yarn.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AnselAdamsChurch.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BreadDelivery1920sIreland.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BritishTeaBombay1890s.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CafeParis1928.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BigManTavern1908NYC.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Cars1890sIreland.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/GalwayIreland1902.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HomeIreland1924.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HydeParkLondon1920s.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1929LondonOverFleetSt.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AccordianKid1900Paris.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AnselAdamsBuildings.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/AthleticClubParis1913.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BombedLibraryLondon1940.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Boston1937.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BoulevardDuTemple1838.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/BumperCarsParis1930.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CafeTerrace1925Paris.jpg\", render_factor=24, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CoalDeliveryParis1915.jpg\", render_factor=37, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CorkKids1910.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/DeepSeaDiver1915.png\", render_factor=16, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/EastEndLondonStreetKids1901.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FreightTrainTeens1934.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HarrodsLondon1920.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HerbSeller1899Paris.jpg\", render_factor=17, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CalcuttaPoliceman1920.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ElectricScooter1915.jpeg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/GreatGrandparentsIrelandEarly1900s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/HalloweenEarly1900s.jpg\", render_factor=11, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/IceManLondon1919.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LeBonMarcheParis1875.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LittleAirplane1934.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/RoyalUniversityMedStudent1900Ireland.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LewisTomalinLondon1895.png\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SunHelmetsLondon1933.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Killarney1910.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonSheep1920s.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PostOfficeVermont1914.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ServantsBessboroughHouse1908Ireland.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WaterfordIreland1909.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Lisbon1919.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/London1918WartimeClothesManufacture.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonHeatWave1935.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonsSmallestShop1900.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/MetropolitanDistrictRailway1869London.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NativeWoman1926.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PaddysMarketCork1900s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Paris1920Cart.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ParisLadies1910.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ParisLadies1930s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Sphinx.jpeg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/TheatreGroupBombay1875.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WorldsFair1900Paris.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/London1850Coach.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/London1900EastEndBlacksmith.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/London1930sCheetah.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonFireBrigadeMember1926.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonGarbageTruck1910.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonRailwayWork1931.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonStreets1900.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/MuffinManlLondon1910.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NativeCouple1912.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NewspaperCivilWar1863.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PaddingtonStationLondon1907.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Paris1899StreetDig.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Paris1926.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ParisWomenFurs1920s.jpg\", render_factor=21, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PeddlerParis1899.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SchoolKidsConnemaraIreland1901.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SecondHandClothesLondonLate1800s.jpg\", render_factor=33, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SoapBoxRacerParis1920s.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SoccerMotorcycles1923London.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/WalkingLibraryLondon1930.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonStreetDoctor1877.png\", render_factor=38, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/jacksonville.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ZebraCarriageLondon1900.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/StreetGramaphonePlayerLondon1920s.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/YaleBranchBarnardsExpress.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SynagogueInterior.PNG\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ArmisticeDay1918.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/FlyingMachinesParis1909.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/GreatAunt1920.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/NewBrunswick1915.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ShoeMakerLate1800s.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/SpottedBull1908.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/TouristsGermany1904.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/TunisianStudents1914.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/Yorktown1862.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/LondonFashion1911.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1939GypsyKids.jpg\", render_factor=37, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1936OpiumShanghai.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1923HollandTunnel.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1939YakimaWAGirl.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/GoldenGateConstruction.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/PostCivilWarAncestors.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1939SewingBike.png\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1930MaineSchoolBus.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1913NewYorkConstruction.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1945HiroshimaChild.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1941GeorgiaFarmhouse.jpg\", render_factor=43, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1934UmbriaItaly.jpg\", render_factor=21) ",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1900sLadiesTeaParty.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1919WWIAviationOxygenMask.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1900NJThanksgiving.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1940Connecticut.jpg\", render_factor=43, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1940Connecticut.jpg\", render_factor=i, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1911ThanksgivingMaskers.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1910ThanksgivingMaskersII.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1936PetToad.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1908RookeriesLondon.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1890sChineseImmigrants.jpg\", render_factor=25, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1897VancouverAmberlamps.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1929VictorianCosplayLondon.jpg\", render_factor=35, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1959ParisFriends.png\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1925GypsyCampMaryland.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1941PoolTableGeorgia.jpg\", render_factor=45, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1900ParkDog.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1886Hoop.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1950sLondonPoliceChild.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1886ProspectPark.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1930sRooftopPoland.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1919RevereBeach.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1936ParisCafe.jpg\", render_factor=46, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1902FrenchYellowBellies.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1940PAFamily.jpg\", render_factor=42, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1910Finland.jpg\", render_factor=40, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/ZebraCarriageLondon1900.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/1904ChineseMan.jpg\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/CrystalPalaceLondon1854.PNG\", compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/James1.jpg\", render_factor=15, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/James2.jpg\", render_factor=20, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/James3.jpg\", render_factor=19, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/James4.jpg\", render_factor=30, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/James5.jpg\", render_factor=32, compare=True)",
"_____no_output_____"
],
[
"vis.plot_transformed_image(\"test_images/James6.jpg\", render_factor=28, compare=True)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2188f798a4b112962c48e76d05586ff7f1b30 | 368,625 | ipynb | Jupyter Notebook | codici/gda-lin-sk-cv.ipynb | tvml/ml2122 | 290ac378b19ec5bbdd2094e42e3c39cd91867c9e | [
"MIT"
] | null | null | null | codici/gda-lin-sk-cv.ipynb | tvml/ml2122 | 290ac378b19ec5bbdd2094e42e3c39cd91867c9e | [
"MIT"
] | null | null | null | codici/gda-lin-sk-cv.ipynb | tvml/ml2122 | 290ac378b19ec5bbdd2094e42e3c39cd91867c9e | [
"MIT"
] | null | null | null | 833.993213 | 109,494 | 0.941024 | [
[
[
"Gaussian discriminant analysis con stessa matrice di covarianza per le distribuzioni delle due classi e conseguente separatore lineare. Implementata in scikit-learn. Valutazione con cross validation. ",
"_____no_output_____"
]
],
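[
[
"A brief aside (standard GDA algebra, added here only for reference): with Gaussian class conditionals $p(x|C_k)=\\mathcal{N}(x \\mid \\mu_k, \\Sigma)$ sharing the same covariance $\\Sigma$, and priors $\\pi_k$, the log posterior ratio is\n\n$$\\log\\frac{p(C_1|x)}{p(C_0|x)} = (\\mu_1-\\mu_0)^T\\Sigma^{-1}x - \\tfrac{1}{2}\\mu_1^T\\Sigma^{-1}\\mu_1 + \\tfrac{1}{2}\\mu_0^T\\Sigma^{-1}\\mu_0 + \\log\\frac{\\pi_1}{\\pi_0},$$\n\nwhich is linear in $x$ because the quadratic terms $x^T\\Sigma^{-1}x$ cancel. This is why the decision boundary drawn below is a straight line.",
"_____no_output_____"
]
],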
[
[
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np\nimport scipy.stats as st\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.model_selection import cross_val_score\nimport sklearn.metrics as mt",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\n\nplt.style.use('fivethirtyeight')\n\nplt.rcParams['font.family'] = 'sans-serif'\nplt.rcParams['font.serif'] = 'Ubuntu'\nplt.rcParams['font.monospace'] = 'Ubuntu Mono'\nplt.rcParams['font.size'] = 10\nplt.rcParams['axes.labelsize'] = 10\nplt.rcParams['axes.labelweight'] = 'bold'\nplt.rcParams['axes.titlesize'] = 10\nplt.rcParams['xtick.labelsize'] = 8\nplt.rcParams['ytick.labelsize'] = 8\nplt.rcParams['legend.fontsize'] = 10\nplt.rcParams['figure.titlesize'] = 12\nplt.rcParams['image.cmap'] = 'jet'\nplt.rcParams['image.interpolation'] = 'none'\nplt.rcParams['figure.figsize'] = (16, 8)\nplt.rcParams['lines.linewidth'] = 2\nplt.rcParams['lines.markersize'] = 8\n\ncolors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c', \n'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', \n'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']\n\ncmap = mcolors.LinearSegmentedColormap.from_list(\"\", [\"#82cafc\", \"#069af3\", \"#0485d1\", colors[0], colors[8]])",
"_____no_output_____"
]
],
[
[
"Leggiamo i dati da un file csv in un dataframe pandas. I dati hanno 3 valori: i primi due corrispondono alle features e sono assegnati alle colonne x1 e x2 del dataframe; il terzo è il valore target, assegnato alla colonna t. Vengono poi creati una matrice X delle features e un vettore target t",
"_____no_output_____"
]
],
[
[
"# legge i dati in dataframe pandas\ndata = pd.read_csv(\"../../data/ex2data1.txt\", header= None,delimiter=',', names=['x1','x2','t'])\n\n# calcola dimensione dei dati\nn = len(data)\nn0 = len(data[data.t==0])\n\n# calcola dimensionalità delle features\nfeatures = data.columns\nnfeatures = len(features)-1\n\nX = np.array(data[features[:-1]])\nt = np.array(data['t'])\n",
"_____no_output_____"
]
],
[
[
"Visualizza il dataset.",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(16,8))\nax = fig.gca()\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, color=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.title('Dataset', fontsize=12)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Definisce un classificatore basato su GDA quadratica ed effettua il training sul dataset.",
"_____no_output_____"
]
],
[
[
"clf = LinearDiscriminantAnalysis(store_covariance=True)\nclf.fit(X, t)",
"_____no_output_____"
]
],
[
[
"Definiamo la griglia 100x100 da utilizzare per la visualizzazione delle varie distribuzioni.",
"_____no_output_____"
]
],
[
[
"# insieme delle ascisse dei punti\nu = np.linspace(min(X[:,0]), max(X[:,0]), 100)\n# insieme delle ordinate dei punti\nv = np.linspace(min(X[:,1]), max(X[:,1]), 100)\n# deriva i punti della griglia: il punto in posizione i,j nella griglia ha ascissa U(i,j) e ordinata V(i,j)\nU, V = np.meshgrid(u, v)",
"_____no_output_____"
]
],
[
[
"Calcola sui punti della griglia le probabilità delle classi $p(x|C_0), p(x|C_1)$ e le probabilità a posteriori delle classi $p(C_0|x), p(C_1|x)$",
"_____no_output_____"
]
],
[
[
"# probabilità a posteriori delle due distribuzioni sulla griglia\nZ = clf.predict_proba(np.c_[U.ravel(), V.ravel()])\npp0 = Z[:, 0].reshape(U.shape)\npp1 = Z[:, 1].reshape(V.shape)\n# rapporto tra le probabilità a posteriori delle classi per tutti i punti della griglia\nz=pp0/pp1 \n\n# probabilità per le due classi sulla griglia\nmu0 = clf.means_[0]\nmu1 = clf.means_[1]\nsigma = clf.covariance_\nvf0=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu0,sigma))\nvf1=np.vectorize(lambda x,y:st.multivariate_normal.pdf([x,y],mu1,sigma))\np0=vf0(U,V)\np1=vf1(U,V)",
"_____no_output_____"
]
],
[
[
"Visualizzazione della distribuzione di $p(x|C_0)$",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(16,8))\nax = fig.gca()\n# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap\nimshow_handle = plt.imshow(p0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nplt.contour(U, V, p0, linewidths=[.7], colors=[colors[6]])\n# rappresenta i punti del dataset\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\n# rappresenta la media della distribuzione\nax.scatter(mu0[0], mu0[1], s=150,c=colors[3], marker='*', alpha=1)\n# inserisce titoli, etc.\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title('Distribuzione di $p(x|C_0)$', fontsize=12)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Visualizzazione della distribuzione di $p(x|C1)$",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(16,8))\nax = fig.gca()\n# inserisce una rappresentazione della probabilità della classe C0 sotto forma di heatmap\nimshow_handle = plt.imshow(p1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nplt.contour(U, V, p1, linewidths=[.7], colors=[colors[6]])\n# rappresenta i punti del dataset\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\n# rappresenta la media della distribuzione\nax.scatter(mu1[0], mu1[1], s=150,c=colors[3], marker='*', alpha=1)\n# inserisce titoli, etc.\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title('Distribuzione di $p(x|C_1)$', fontsize=12)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Visualizzazione di $p(C_0|x)$",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(8,8))\nax = fig.gca()\nimshow_handle = plt.imshow(pp0, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\nplt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title(\"Distribuzione di $p(C_0|x)$\", fontsize=12)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Visualizzazione di $p(C_1|x)$",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(8,8))\nax = fig.gca()\nimshow_handle = plt.imshow(pp1, origin='lower', extent=(min(X[:,0]), max(X[:,0]), min(X[:,1]), max(X[:,1])), alpha=.7)\nax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, c=colors[0], alpha=.7)\nax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40,c=colors[1], alpha=.7)\nplt.contour(U, V, z, [1.0], colors=[colors[7]],linewidths=[1])\nplt.xlabel('$x_1$', fontsize=12)\nplt.ylabel('$x_2$', fontsize=12)\nplt.xticks(fontsize=10)\nplt.yticks(fontsize=10)\nplt.xlim(u.min(), u.max())\nplt.ylim(v.min(), v.max())\nplt.title(\"Distribuzione di $p(C_1|x)$\", fontsize=12)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Applica la cross validation (5-fold) per calcolare l'accuracy effettuando la media sui 5 valori restituiti.",
"_____no_output_____"
]
],
[
[
"print(\"Accuracy: {0:5.3f}\".format(cross_val_score(clf, X, t, cv=5, scoring='accuracy').mean()))",
"Accuracy: 0.870\n"
]
]
] | [
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"raw"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c221bd0a29c6cae5b20b045e914ea9055567f2 | 825,210 | ipynb | Jupyter Notebook | notebooks/ML Pipeline Preparation.ipynb | inmaugarc/DataScience_Disaster_Response_Pipeline | b83ae016db1931ad9f3ae48ed7c904891d073e02 | [
"CNRI-Python",
"blessing"
] | null | null | null | notebooks/ML Pipeline Preparation.ipynb | inmaugarc/DataScience_Disaster_Response_Pipeline | b83ae016db1931ad9f3ae48ed7c904891d073e02 | [
"CNRI-Python",
"blessing"
] | null | null | null | notebooks/ML Pipeline Preparation.ipynb | inmaugarc/DataScience_Disaster_Response_Pipeline | b83ae016db1931ad9f3ae48ed7c904891d073e02 | [
"CNRI-Python",
"blessing"
] | null | null | null | 192.941314 | 320,200 | 0.845797 | [
[
[
"# ML Pipeline Preparation\nFollow the instructions below to help you create your ML pipeline.\n### 1. Import libraries and load data from database.\n- Import Python libraries\n- Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)\n- Define feature and target variables X and Y",
"_____no_output_____"
]
],
[
[
"# import basic libraries\nimport os\nimport pickle\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns \nfrom sqlalchemy import create_engine\n# import nltk and text processing (like regular expresion) libraries\nimport nltk\nnltk.download(['punkt', 'wordnet','stopwords'])\nfrom nltk.tokenize import word_tokenize\nfrom nltk.corpus import stopwords\nfrom nltk.stem.wordnet import WordNetLemmatizer\nimport re\n\n# import libraries for transformation\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer\n\n# import machine learning libraries\nfrom sklearn.datasets import make_multilabel_classification\nfrom sklearn.multioutput import MultiOutputClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.metrics import confusion_matrix, classification_report, accuracy_score, fbeta_score, make_scorer\nfrom sklearn.model_selection import train_test_split,GridSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\nstop_words = stopwords.words(\"english\")\nlemmatizer = WordNetLemmatizer()",
"[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n[nltk_data] Downloading package wordnet to /root/nltk_data...\n[nltk_data] Package wordnet is already up-to-date!\n[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n"
],
[
"# load data from database\ndb_file = \"./DisasterResponse.db\"\n# create the connection to the DB\nengine = create_engine('sqlite:///DisasterResponse.db')\n# prepare a table name\ntable_name = os.path.basename(db_file).replace(\".db\",\"\")\n# load the info from the sql table into a pandas file\ndf = pd.read_sql_table(table_name,engine)",
"_____no_output_____"
]
],
[
[
"# Exploratory Data Analysis (EDA)",
"_____no_output_____"
],
[
"Let's do some Exploratory Data Analysis.\nFirst of all we'll see an overview of the dataset",
"_____no_output_____"
]
],
[
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 26216 entries, 0 to 26215\nData columns (total 40 columns):\nid 26216 non-null int64\nmessage 26216 non-null object\noriginal 10170 non-null object\ngenre 26216 non-null object\nrelated 26216 non-null int64\nrequest 26216 non-null int64\noffer 26216 non-null int64\naid_related 26216 non-null int64\nmedical_help 26216 non-null int64\nmedical_products 26216 non-null int64\nsearch_and_rescue 26216 non-null int64\nsecurity 26216 non-null int64\nmilitary 26216 non-null int64\nchild_alone 26216 non-null int64\nwater 26216 non-null int64\nfood 26216 non-null int64\nshelter 26216 non-null int64\nclothing 26216 non-null int64\nmoney 26216 non-null int64\nmissing_people 26216 non-null int64\nrefugees 26216 non-null int64\ndeath 26216 non-null int64\nother_aid 26216 non-null int64\ninfrastructure_related 26216 non-null int64\ntransport 26216 non-null int64\nbuildings 26216 non-null int64\nelectricity 26216 non-null int64\ntools 26216 non-null int64\nhospitals 26216 non-null int64\nshops 26216 non-null int64\naid_centers 26216 non-null int64\nother_infrastructure 26216 non-null int64\nweather_related 26216 non-null int64\nfloods 26216 non-null int64\nstorm 26216 non-null int64\nfire 26216 non-null int64\nearthquake 26216 non-null int64\ncold 26216 non-null int64\nother_weather 26216 non-null int64\ndirect_report 26216 non-null int64\ndtypes: int64(37), object(3)\nmemory usage: 8.0+ MB\n"
],
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"We can see there are 40 columns and 26216 entries and our dataset has a memory usage: of 8.0+ MB\nThe dataset is pretty complete, with almost all its values (non-null).\nThere are three string fields and the rest of fields are of type integer\nIf we look at the fields, the major part of them are values between [0-1] except the \"id\" field and the \"related\" field whose values are between [0-2].\nAs it was stated, all values have to be binary, so we are going to explore this field",
"_____no_output_____"
],
[
"Now let's see how many records of every \"related\" field are. \nWe will use these query for the final visualization part",
"_____no_output_____"
]
],
[
[
"df.groupby(\"related\").count()",
"_____no_output_____"
],
[
"df[df['related'] ==2].head()",
"_____no_output_____"
],
[
"df[df['related'] ==2].describe()",
"_____no_output_____"
],
[
"df[df['related'] ==0].describe()",
"_____no_output_____"
],
[
"df[df['related'] ==1].describe()",
"_____no_output_____"
]
],
[
[
"After exploring this field, there is not much information. \nOnly that there are few entries of related field with value=2 compare to the rest of the values.\nWith the related value=2 we have two ways of working:\n 1. We could impute them with another value, for instance value=1 that is the most often\n 2. Drop them\nAnd I will drop them",
"_____no_output_____"
]
],
[
[
"df = df[df.related !=2]",
"_____no_output_____"
]
],
[
[
"And we'll check",
"_____no_output_____"
]
],
[
[
"df['related'].describe()",
"_____no_output_____"
],
[
"df.groupby(\"related\").count()",
"_____no_output_____"
]
],
[
[
"Now we'll check the 10 first lines of the dataset",
"_____no_output_____"
]
],
[
[
"df.head(10)",
"_____no_output_____"
]
],
[
[
"## Pearson correlation between variables\nLet's build a heatmap, to see the correlation of each variable",
"_____no_output_____"
]
],
[
[
"data = df.copy()\nf,ax = plt.subplots(figsize=(15, 15))\nsns.heatmap(data.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax)\nplt.show()",
"_____no_output_____"
]
],
[
[
"From this Pearson map, we clearly see there is a strange column \"child_alone\"\nLet's explore more in detail ",
"_____no_output_____"
]
],
[
[
"df.groupby(\"child_alone\").count()",
"_____no_output_____"
]
],
[
[
"We see that column only contains zero values so we are going to delete them, as it doesn't bring us much help",
"_____no_output_____"
]
],
[
[
"df = df.drop([\"child_alone\"],axis=1)",
"_____no_output_____"
]
],
[
[
"Let's see the Pearson again",
"_____no_output_____"
]
],
[
[
"data = df.copy()\nf,ax = plt.subplots(figsize=(15, 15))\nsns.heatmap(data.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We have a glimpse of the data!!",
"_____no_output_____"
],
[
"Now, we will start with the Machine Learning tasks",
"_____no_output_____"
]
],
[
[
"# We separate the features from the variables we are going to predict\nX = df ['message']\ny = df.drop(columns = ['id', 'message', 'original', 'genre'])",
"_____no_output_____"
]
],
[
[
"### 2. Tokenization function to process your text data",
"_____no_output_____"
]
],
[
[
"def tokenize(text):\n # normalize case and remove punctuation\n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text.lower())\n \n # tokenize text\n tokens = word_tokenize(text)\n \n # lemmatize andremove stop words\n tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]\n\n return tokens",
"_____no_output_____"
]
],
[
[
"### 3. Build a machine learning pipeline\nThis machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.",
"_____no_output_____"
]
],
[
[
"pipeline = Pipeline([\n ('vect', CountVectorizer(tokenizer=tokenize)),\n ('tfidf', TfidfTransformer()),\n ('clf', MultiOutputClassifier (RandomForestClassifier())) \n ])",
"_____no_output_____"
]
],
[
[
"### 4. Train pipeline\n- Split data into train and test sets\n- Train pipeline",
"_____no_output_____"
]
],
[
[
"X_train, X_test, y_train, y_test = train_test_split(X, y)\n\n# train classifier\npipeline.fit(X_train, y_train) ",
"_____no_output_____"
]
],
[
[
"### 5. Test your model\nReport the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.",
"_____no_output_____"
]
],
[
[
"# predict on test data\ny_pred = pipeline.predict(X_test)",
"_____no_output_____"
],
[
"# we check dimensions\nX_train.shape, y_train.shape, y.shape, X.shape",
"_____no_output_____"
],
[
"# and print metrics\naccuracy = (y_pred == y_test).mean()\nprint(\"Accuracy:\", accuracy, \"\\n\")\n\ncategory_names = list(y.columns)\n\nfor i in range(len(category_names)):\n print(\"Output Category:\", category_names[i],\"\\n\", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))\n print('Accuracy of %25s: %.2f' %(category_names[i], accuracy_score(y_test.iloc[:, i].values, y_pred[:,i])))\n",
"Accuracy: related 0.805920\nrequest 0.884803\noffer 0.994355\naid_related 0.763198\nmedical_help 0.922338\nmedical_products 0.950717\nsearch_and_rescue 0.973451\nsecurity 0.982606\nmilitary 0.972078\nwater 0.948581\nfood 0.932865\nshelter 0.937138\nclothing 0.989472\nmoney 0.979402\nmissing_people 0.987489\nrefugees 0.964602\ndeath 0.958956\nother_aid 0.870613\ninfrastructure_related 0.932408\ntransport 0.955294\nbuildings 0.956057\nelectricity 0.980623\ntools 0.994202\nhospitals 0.988709\nshops 0.994965\naid_centers 0.988099\nother_infrastructure 0.956515\nweather_related 0.866341\nfloods 0.944919\nstorm 0.935307\nfire 0.990082\nearthquake 0.970247\ncold 0.980317\nother_weather 0.950565\ndirect_report 0.844675\ndtype: float64 \n\nOutput Category: related \n precision recall f1-score support\n\n 0 0.63 0.47 0.54 1582\n 1 0.84 0.91 0.88 4972\n\navg / total 0.79 0.81 0.79 6554\n\nAccuracy of related: 0.81\nOutput Category: request \n precision recall f1-score support\n\n 0 0.90 0.97 0.93 5425\n 1 0.78 0.47 0.58 1129\n\navg / total 0.88 0.88 0.87 6554\n\nAccuracy of request: 0.88\nOutput Category: offer \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6517\n 1 0.00 0.00 0.00 37\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of offer: 0.99\nOutput Category: aid_related \n precision recall f1-score support\n\n 0 0.77 0.86 0.81 3877\n 1 0.75 0.63 0.68 2677\n\navg / total 0.76 0.76 0.76 6554\n\nAccuracy of aid_related: 0.76\nOutput Category: medical_help \n precision recall f1-score support\n\n 0 0.93 1.00 0.96 6040\n 1 0.54 0.07 0.12 514\n\navg / total 0.90 0.92 0.89 6554\n\nAccuracy of medical_help: 0.92\nOutput Category: medical_products \n precision recall f1-score support\n\n 0 0.95 1.00 0.97 6224\n 1 0.61 0.06 0.11 330\n\navg / total 0.94 0.95 0.93 6554\n\nAccuracy of medical_products: 0.95\nOutput Category: search_and_rescue \n precision recall f1-score support\n\n 0 0.97 1.00 0.99 6370\n 1 0.73 0.09 0.16 184\n\navg / total 0.97 0.97 0.96 6554\n\nAccuracy of search_and_rescue: 0.97\nOutput Category: security \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6443\n 1 0.00 0.00 0.00 111\n\navg / total 0.97 0.98 0.97 6554\n\nAccuracy of security: 0.98\nOutput Category: military \n precision recall f1-score support\n\n 0 0.97 1.00 0.99 6363\n 1 0.64 0.09 0.16 191\n\navg / total 0.96 0.97 0.96 6554\n\nAccuracy of military: 0.97\nOutput Category: water \n precision recall f1-score support\n\n 0 0.95 1.00 0.97 6126\n 1 0.82 0.27 0.41 428\n\navg / total 0.94 0.95 0.94 6554\n\nAccuracy of water: 0.95\nOutput Category: food \n precision recall f1-score support\n\n 0 0.94 0.99 0.96 5834\n 1 0.81 0.51 0.63 720\n\navg / total 0.93 0.93 0.93 6554\n\nAccuracy of food: 0.93\nOutput Category: shelter \n precision recall f1-score support\n\n 0 0.94 0.99 0.97 5999\n 1 0.80 0.34 0.48 555\n\navg / total 0.93 0.94 0.93 6554\n\nAccuracy of shelter: 0.94\nOutput Category: clothing \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6474\n 1 0.82 0.17 0.29 80\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of clothing: 0.99\nOutput Category: money \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6414\n 1 0.86 0.04 0.08 140\n\navg / total 0.98 0.98 0.97 6554\n\nAccuracy of money: 0.98\nOutput Category: missing_people \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6471\n 1 1.00 0.01 0.02 83\n\navg / total 0.99 0.99 0.98 6554\n\nAccuracy of missing_people: 0.99\nOutput Category: refugees \n precision recall f1-score support\n\n 0 0.97 1.00 0.98 6325\n 1 0.36 0.02 0.03 
229\n\navg / total 0.94 0.96 0.95 6554\n\nAccuracy of refugees: 0.96\nOutput Category: death \n precision recall f1-score support\n\n 0 0.96 1.00 0.98 6249\n 1 0.80 0.16 0.26 305\n\navg / total 0.95 0.96 0.95 6554\n\nAccuracy of death: 0.96\nOutput Category: other_aid \n precision recall f1-score support\n\n 0 0.87 0.99 0.93 5700\n 1 0.54 0.04 0.08 854\n\navg / total 0.83 0.87 0.82 6554\n\nAccuracy of other_aid: 0.87\nOutput Category: infrastructure_related \n precision recall f1-score support\n\n 0 0.93 1.00 0.97 6125\n 1 0.06 0.00 0.00 429\n\navg / total 0.88 0.93 0.90 6554\n\nAccuracy of infrastructure_related: 0.93\nOutput Category: transport \n precision recall f1-score support\n\n 0 0.96 1.00 0.98 6253\n 1 0.62 0.07 0.13 301\n\navg / total 0.94 0.96 0.94 6554\n\nAccuracy of transport: 0.96\nOutput Category: buildings \n precision recall f1-score support\n\n 0 0.96 1.00 0.98 6245\n 1 0.78 0.09 0.17 309\n\navg / total 0.95 0.96 0.94 6554\n\nAccuracy of buildings: 0.96\nOutput Category: electricity \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6424\n 1 0.71 0.04 0.07 130\n\navg / total 0.98 0.98 0.97 6554\n\nAccuracy of electricity: 0.98\nOutput Category: tools \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6516\n 1 0.00 0.00 0.00 38\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of tools: 0.99\nOutput Category: hospitals \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6480\n 1 0.00 0.00 0.00 74\n\navg / total 0.98 0.99 0.98 6554\n\nAccuracy of hospitals: 0.99\nOutput Category: shops \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6521\n 1 0.00 0.00 0.00 33\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of shops: 0.99\nOutput Category: aid_centers \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6476\n 1 0.00 0.00 0.00 78\n\navg / total 0.98 0.99 0.98 6554\n\nAccuracy of aid_centers: 0.99\nOutput Category: other_infrastructure \n precision recall f1-score support\n\n 0 0.96 1.00 0.98 6271\n 1 0.00 0.00 0.00 283\n\navg / total 0.92 0.96 0.94 6554\n\nAccuracy of other_infrastructure: 0.96\nOutput Category: weather_related \n precision recall f1-score support\n\n 0 0.87 0.95 0.91 4746\n 1 0.84 0.64 0.72 1808\n\navg / total 0.86 0.87 0.86 6554\n\nAccuracy of weather_related: 0.87\nOutput Category: floods \n precision recall f1-score support\n\n 0 0.95 1.00 0.97 6039\n 1 0.87 0.35 0.50 515\n\navg / total 0.94 0.94 0.93 6554\n\nAccuracy of floods: 0.94\nOutput Category: storm \n precision recall f1-score support\n\n 0 0.95 0.98 0.97 5939\n 1 0.76 0.46 0.57 615\n\navg / total 0.93 0.94 0.93 6554\n\nAccuracy of storm: 0.94\nOutput Category: fire \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6488\n 1 1.00 0.02 0.03 66\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of fire: 0.99\nOutput Category: earthquake \n precision recall f1-score support\n\n 0 0.98 0.99 0.98 5954\n 1 0.90 0.76 0.82 600\n\navg / total 0.97 0.97 0.97 6554\n\nAccuracy of earthquake: 0.97\nOutput Category: cold \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6416\n 1 0.67 0.13 0.22 138\n\navg / total 0.97 0.98 0.97 6554\n\nAccuracy of cold: 0.98\nOutput Category: other_weather \n precision recall f1-score support\n\n 0 0.95 1.00 0.97 6231\n 1 0.48 0.03 0.06 323\n\navg / total 0.93 0.95 0.93 6554\n\nAccuracy of other_weather: 0.95\nOutput Category: direct_report \n precision recall f1-score support\n\n 0 0.86 0.97 0.91 5271\n 1 0.73 0.32 0.45 1283\n\navg / total 0.83 0.84 0.82 6554\n\nAccuracy of direct_report: 0.84\n"
]
],
[
[
"### 6. Improve your model\nUse grid search to find better parameters. ",
"_____no_output_____"
]
],
[
[
"base = Pipeline([\n ('vect', CountVectorizer(tokenizer=tokenize)),\n ('tfidf', TfidfTransformer()),\n ('clf', MultiOutputClassifier(RandomForestClassifier()))\n ])\n \nparameters = {'clf__estimator__n_estimators': [10, 20],\n 'clf__estimator__max_depth': [2, 5],\n 'clf__estimator__min_samples_split': [2, 3, 4],\n 'clf__estimator__criterion': ['entropy']\n }\n \ncv = GridSearchCV(base, param_grid=parameters, n_jobs=-1, cv=2, verbose=3)",
"_____no_output_____"
],
[
"cv.fit(X_train, y_train)",
"Fitting 2 folds for each of 12 candidates, totalling 24 fits\n[CV] clf__estimator__criterion=entropy, clf__estimator__max_depth=2, clf__estimator__min_samples_split=2, clf__estimator__n_estimators=10 \n[CV] clf__estimator__criterion=entropy, clf__estimator__max_depth=2, clf__estimator__min_samples_split=2, clf__estimator__n_estimators=10, score=0.20160716102125928, total= 10.1s\n[CV] clf__estimator__criterion=entropy, clf__estimator__max_depth=2, clf__estimator__min_samples_split=2, clf__estimator__n_estimators=10 \n"
]
],
[
[
"### 7. Test your model\nShow the accuracy, precision, and recall of the tuned model. \n\nSince this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!",
"_____no_output_____"
]
],
[
[
"y_pred = cv.predict(X_test)",
"_____no_output_____"
],
[
"accuracy = (y_pred == y_test).mean()\nprint(\"Accuracy:\", accuracy, \"\\n\")\nfor i in range(len(category_names)):\n print(\"Output Category:\", category_names[i],\"\\n\", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))\n print('Accuracy of %25s: %.2f' %(category_names[i], accuracy_score(y_test.iloc[:, i].values, y_pred[:,i])))\n",
"Accuracy: related 0.758621\nrequest 0.827739\noffer 0.994355\naid_related 0.591700\nmedical_help 0.921575\nmedical_products 0.949649\nsearch_and_rescue 0.971926\nsecurity 0.983064\nmilitary 0.970857\nwater 0.934696\nfood 0.890143\nshelter 0.915319\nclothing 0.987794\nmoney 0.978639\nmissing_people 0.987336\nrefugees 0.965060\ndeath 0.953464\nother_aid 0.869698\ninfrastructure_related 0.934544\ntransport 0.954074\nbuildings 0.952853\nelectricity 0.980165\ntools 0.994202\nhospitals 0.988709\nshops 0.994965\naid_centers 0.988099\nother_infrastructure 0.956820\nweather_related 0.724291\nfloods 0.921422\nstorm 0.906164\nfire 0.989930\nearthquake 0.908453\ncold 0.978944\nother_weather 0.950717\ndirect_report 0.804242\ndtype: float64 \n\nOutput Category: related \n precision recall f1-score support\n\n 0 0.00 0.00 0.00 1582\n 1 0.76 1.00 0.86 4972\n\navg / total 0.58 0.76 0.65 6554\n\nAccuracy of related: 0.76\nOutput Category: request \n precision recall f1-score support\n\n 0 0.83 1.00 0.91 5425\n 1 0.00 0.00 0.00 1129\n\navg / total 0.69 0.83 0.75 6554\n\nAccuracy of request: 0.83\nOutput Category: offer \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6517\n 1 0.00 0.00 0.00 37\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of offer: 0.99\nOutput Category: aid_related \n precision recall f1-score support\n\n 0 0.59 1.00 0.74 3877\n 1 1.00 0.00 0.00 2677\n\navg / total 0.76 0.59 0.44 6554\n\nAccuracy of aid_related: 0.59\nOutput Category: medical_help \n precision recall f1-score support\n\n 0 0.92 1.00 0.96 6040\n 1 0.00 0.00 0.00 514\n\navg / total 0.85 0.92 0.88 6554\n\nAccuracy of medical_help: 0.92\nOutput Category: medical_products \n precision recall f1-score support\n\n 0 0.95 1.00 0.97 6224\n 1 0.00 0.00 0.00 330\n\navg / total 0.90 0.95 0.93 6554\n\nAccuracy of medical_products: 0.95\nOutput Category: search_and_rescue \n precision recall f1-score support\n\n 0 0.97 1.00 0.99 6370\n 1 0.00 0.00 0.00 184\n\navg / total 0.94 0.97 0.96 6554\n\nAccuracy of search_and_rescue: 0.97\nOutput Category: security \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6443\n 1 0.00 0.00 0.00 111\n\navg / total 0.97 0.98 0.97 6554\n\nAccuracy of security: 0.98\nOutput Category: military \n precision recall f1-score support\n\n 0 0.97 1.00 0.99 6363\n 1 0.00 0.00 0.00 191\n\navg / total 0.94 0.97 0.96 6554\n\nAccuracy of military: 0.97\nOutput Category: water \n precision recall f1-score support\n\n 0 0.93 1.00 0.97 6126\n 1 0.00 0.00 0.00 428\n\navg / total 0.87 0.93 0.90 6554\n\nAccuracy of water: 0.93\nOutput Category: food \n precision recall f1-score support\n\n 0 0.89 1.00 0.94 5834\n 1 0.00 0.00 0.00 720\n\navg / total 0.79 0.89 0.84 6554\n\nAccuracy of food: 0.89\nOutput Category: shelter \n precision recall f1-score support\n\n 0 0.92 1.00 0.96 5999\n 1 0.00 0.00 0.00 555\n\navg / total 0.84 0.92 0.87 6554\n\nAccuracy of shelter: 0.92\nOutput Category: clothing \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6474\n 1 0.00 0.00 0.00 80\n\navg / total 0.98 0.99 0.98 6554\n\nAccuracy of clothing: 0.99\nOutput Category: money \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6414\n 1 0.00 0.00 0.00 140\n\navg / total 0.96 0.98 0.97 6554\n\nAccuracy of money: 0.98\nOutput Category: missing_people \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6471\n 1 0.00 0.00 0.00 83\n\navg / total 0.97 0.99 0.98 6554\n\nAccuracy of missing_people: 0.99\nOutput Category: refugees \n precision recall f1-score support\n\n 0 0.97 1.00 0.98 6325\n 1 0.00 0.00 0.00 
229\n\navg / total 0.93 0.97 0.95 6554\n\nAccuracy of refugees: 0.97\nOutput Category: death \n precision recall f1-score support\n\n 0 0.95 1.00 0.98 6249\n 1 0.00 0.00 0.00 305\n\navg / total 0.91 0.95 0.93 6554\n\nAccuracy of death: 0.95\nOutput Category: other_aid \n precision recall f1-score support\n\n 0 0.87 1.00 0.93 5700\n 1 0.00 0.00 0.00 854\n\navg / total 0.76 0.87 0.81 6554\n\nAccuracy of other_aid: 0.87\nOutput Category: infrastructure_related \n precision recall f1-score support\n\n 0 0.93 1.00 0.97 6125\n 1 0.00 0.00 0.00 429\n\navg / total 0.87 0.93 0.90 6554\n\nAccuracy of infrastructure_related: 0.93\nOutput Category: transport \n precision recall f1-score support\n\n 0 0.95 1.00 0.98 6253\n 1 0.00 0.00 0.00 301\n\navg / total 0.91 0.95 0.93 6554\n\nAccuracy of transport: 0.95\nOutput Category: buildings \n precision recall f1-score support\n\n 0 0.95 1.00 0.98 6245\n 1 0.00 0.00 0.00 309\n\navg / total 0.91 0.95 0.93 6554\n\nAccuracy of buildings: 0.95\nOutput Category: electricity \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6424\n 1 0.00 0.00 0.00 130\n\navg / total 0.96 0.98 0.97 6554\n\nAccuracy of electricity: 0.98\nOutput Category: tools \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6516\n 1 0.00 0.00 0.00 38\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of tools: 0.99\nOutput Category: hospitals \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6480\n 1 0.00 0.00 0.00 74\n\navg / total 0.98 0.99 0.98 6554\n\nAccuracy of hospitals: 0.99\nOutput Category: shops \n precision recall f1-score support\n\n 0 0.99 1.00 1.00 6521\n 1 0.00 0.00 0.00 33\n\navg / total 0.99 0.99 0.99 6554\n\nAccuracy of shops: 0.99\nOutput Category: aid_centers \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6476\n 1 0.00 0.00 0.00 78\n\navg / total 0.98 0.99 0.98 6554\n\nAccuracy of aid_centers: 0.99\nOutput Category: other_infrastructure \n precision recall f1-score support\n\n 0 0.96 1.00 0.98 6271\n 1 0.00 0.00 0.00 283\n\navg / total 0.92 0.96 0.94 6554\n\nAccuracy of other_infrastructure: 0.96\nOutput Category: weather_related \n precision recall f1-score support\n\n 0 0.72 1.00 0.84 4746\n 1 1.00 0.00 0.00 1808\n\navg / total 0.80 0.72 0.61 6554\n\nAccuracy of weather_related: 0.72\nOutput Category: floods \n precision recall f1-score support\n\n 0 0.92 1.00 0.96 6039\n 1 0.00 0.00 0.00 515\n\navg / total 0.85 0.92 0.88 6554\n\nAccuracy of floods: 0.92\nOutput Category: storm \n precision recall f1-score support\n\n 0 0.91 1.00 0.95 5939\n 1 0.00 0.00 0.00 615\n\navg / total 0.82 0.91 0.86 6554\n\nAccuracy of storm: 0.91\nOutput Category: fire \n precision recall f1-score support\n\n 0 0.99 1.00 0.99 6488\n 1 0.00 0.00 0.00 66\n\navg / total 0.98 0.99 0.98 6554\n\nAccuracy of fire: 0.99\nOutput Category: earthquake \n precision recall f1-score support\n\n 0 0.91 1.00 0.95 5954\n 1 0.00 0.00 0.00 600\n\navg / total 0.83 0.91 0.86 6554\n\nAccuracy of earthquake: 0.91\nOutput Category: cold \n precision recall f1-score support\n\n 0 0.98 1.00 0.99 6416\n 1 0.00 0.00 0.00 138\n\navg / total 0.96 0.98 0.97 6554\n\nAccuracy of cold: 0.98\nOutput Category: other_weather \n precision recall f1-score support\n\n 0 0.95 1.00 0.97 6231\n 1 0.00 0.00 0.00 323\n\navg / total 0.90 0.95 0.93 6554\n\nAccuracy of other_weather: 0.95\nOutput Category: direct_report \n precision recall f1-score support\n\n 0 0.80 1.00 0.89 5271\n 1 0.00 0.00 0.00 1283\n\navg / total 0.65 0.80 0.72 6554\n\nAccuracy of direct_report: 0.80\n"
]
],
[
[
"### 8. Try improving your model further. Here are a few ideas:\n* try other machine learning algorithms\n* add other features besides the TF-IDF",
"_____no_output_____"
]
],
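[
[
"One way to add features besides the TF-IDF vectors is a `FeatureUnion` that concatenates the TF-IDF matrix with simple hand-crafted features. The cell below is only a sketch of that idea: the `MessageLengthExtractor` transformer and its placement are illustrative assumptions, not part of the original notebook.",
"_____no_output_____"
]
],
[
[
"from sklearn.base import BaseEstimator, TransformerMixin\nfrom sklearn.pipeline import FeatureUnion\n\n\nclass MessageLengthExtractor(BaseEstimator, TransformerMixin):\n    \"\"\"Hypothetical extra feature: the number of tokens in each message.\"\"\"\n\n    def fit(self, X, y=None):\n        return self\n\n    def transform(self, X):\n        # one column per message: its token count\n        return np.array([len(tokenize(text)) for text in X]).reshape(-1, 1)\n\n\npipeline_extra = Pipeline([\n    ('features', FeatureUnion([\n        ('text', Pipeline([\n            ('vect', CountVectorizer(tokenizer=tokenize)),\n            ('tfidf', TfidfTransformer())\n        ])),\n        ('length', MessageLengthExtractor())\n    ])),\n    ('clf', MultiOutputClassifier(RandomForestClassifier()))\n])\n\n# pipeline_extra.fit(X_train, y_train)  # train it exactly like the base pipeline",
"_____no_output_____"
]
],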
[
[
"from sklearn.decomposition import TruncatedSVD\nimport sklearn\n\nbase = Pipeline([\n ('vect',CountVectorizer(tokenizer=tokenize)),\n ('tfidf', TfidfTransformer()),\n ('lsa', TruncatedSVD(random_state=42, n_components=100)),\n # ('clf', MultiOutputClassifier(sklearn.svm.SVC(random_state=42, class_weight='balanced', gamma='scale')))\n ('clf', MultiOutputClassifier(sklearn.svm.SVC(random_state=42)))\n ])\n\n# SVC parameters\nparameters = {'clf__estimator__kernel': ['linear', 'rbf'],\n 'clf__estimator__C': [0.1, 1, 5]\n }\n\ncv = GridSearchCV(base, param_grid=parameters, n_jobs=-1, cv=2, scoring='f1_samples',verbose=3)",
"_____no_output_____"
],
[
"cv.fit(X_train, y_train)",
"Fitting 2 folds for each of 6 candidates, totalling 12 fits\n[CV] clf__estimator__C=0.1, clf__estimator__kernel=linear ............\n"
],
[
"y_pred = cv.predict(X_test)",
"_____no_output_____"
],
[
"accuracy = (y_pred == y_test).mean()\nprint(\"Accuracy:\", accuracy, \"\\n\")\n\ncategory_names = list(y.columns)\n\nfor i in range(len(category_names)):\n print(\"Output Category:\", category_names[i],\"\\n\", classification_report(y_test.iloc[:, i].values, y_pred[:, i]))\n print('Accuracy of %25s: %.2f' %(category_names[i], accuracy_score(y_test.iloc[:, i].values, y_pred[:,i])))",
"_____no_output_____"
]
],
[
[
"### 9. Export your model as a pickle file",
"_____no_output_____"
]
],
[
[
"def save_model(model, model_filepath):\n \"\"\"\n Save the model as a pickle file:\n\n This procedure saves the model as a pickle file\n\n Args: model, X set, y set\n\n Returns:\n nothing, it runs the model and it displays accuracy metrics\n \"\"\"\n try: \n pickle.dump(model, open(model_filepath, 'wb'))\n except:\n print(\"Error saving the model as a {} pickle file\".format(model_filepath))",
"_____no_output_____"
],
[
"save_model(cv,\"classifier2.pkl\")",
"_____no_output_____"
],
[
"pickle.dump(cv, open(\"classifier2.pkl\", 'wb'))",
"_____no_output_____"
]
],
[
[
"### 10. Use this notebook to complete `train.py`\nUse the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.",
"_____no_output_____"
],
[
"# Refactoring",
"_____no_output_____"
]
],
[
[
" def load_data(db_file):\n \"\"\"\n Load data function\n \n This method receives a database file on a path and it loads data \n from that database file into a pandas datafile.\n It also splits the data into X and y (X: features to work and y: labels to predict) \n It returns two sets of data: X and y\n \n Args: \n db_file (str): Filepath where database is stored.\n \n Returns: \n X (DataFrame): Feature columns\n y (DataFrame): Label columns\n \n \"\"\"\n # load data from database\n # db_file = \"./CleanDisasterResponse.db\"\n # create the connection to the DB\n engine = create_engine('sqlite:///{}'.format(db_file))\n table_name = os.path.basename(db_file).replace(\".db\",\"\")\n # load the info from the sql table into a pandas file\n df = pd.read_sql_table(table_name,engine)\n # We separate the features from the variables we are going to predict\n X = df ['message']\n y = df.drop(columns = ['id', 'message', 'original', 'genre'])\n\n return X, y",
"_____no_output_____"
],
[
"def display_results(y_test, y_pred):\n labels = np.unique(y_pred)\n confusion_mat = confusion_matrix(y_test, y_pred, labels=labels)\n accuracy = (y_pred == y_test).mean()\n\n print(\"Labels:\", labels)\n print(\"Confusion Matrix:\\n\", confusion_mat)\n print(\"Accuracy:\", accuracy)",
"_____no_output_____"
],
[
"A,b = load_data(\"./CleanDisasterResponse.db\")",
"_____no_output_____"
],
[
"def tokenize(text):\n \"\"\"\n Tokenize function\n \n Args:\n This method receives a text and it tokenizes it \n \n Returns: a set of tokens\n \n \"\"\"\n # normalize case and remove punctuation\n text = re.sub(r\"[^a-zA-Z0-9]\", \" \", text.lower())\n \n # tokenize text\n tokens = word_tokenize(text)\n \n # lemmatize andremove stop words\n tokens = [lemmatizer.lemmatize(word) for word in tokens if word not in stop_words]\n\n return tokens",
"_____no_output_____"
],
[
"def save_model(model, model_filepath):\n \"\"\"\n Save the model as a pickle file:\n\n This procedure saves the model as a pickle file\n\n Args: model, X set, y set\n\n Returns:\n nothing, it runs the model and it displays accuracy metrics\n \"\"\"\n pickle.dump(model, open(model_filepath, 'wb'))",
"_____no_output_____"
]
],
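[
[
"To tie the refactored helpers together, `train.py` mainly needs a small `main` routine. The cell below is a minimal, hypothetical outline; the command-line handling and the simple pipeline inside `build_model` are illustrative assumptions rather than the exact template code.",
"_____no_output_____"
]
],
[
[
"import sys\n\n\ndef build_model():\n    # deliberately simple pipeline; swap in the tuned GridSearchCV estimator if preferred\n    return Pipeline([\n        ('vect', CountVectorizer(tokenizer=tokenize)),\n        ('tfidf', TfidfTransformer()),\n        ('clf', MultiOutputClassifier(RandomForestClassifier()))\n    ])\n\n\ndef main():\n    if len(sys.argv) != 3:\n        print('Usage: python train.py <database_filepath> <model_filepath>')\n        return\n    database_filepath, model_filepath = sys.argv[1], sys.argv[2]\n    X, y = load_data(database_filepath)\n    X_train, X_test, y_train, y_test = train_test_split(X, y)\n    model = build_model()\n    model.fit(X_train, y_train)\n    print('Mean test accuracy:', (model.predict(X_test) == y_test).mean().mean())\n    save_model(model, model_filepath)\n\n\n# main()  # uncomment when running this code as a script",
"_____no_output_____"
]
],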
[
[
"And now let's try some queries to show interesting data on the html web",
"_____no_output_____"
]
],
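[
[
"The cells that follow compute a few aggregates for those visualizations. As a sketch of how one of them could be turned into a Plotly figure for the Flask front end (the variable names are examples; the calls are standard Plotly ones):",
"_____no_output_____"
]
],
[
[
"import plotly.graph_objs as go\n\n# counts per category, computed the same way as in the cells below\ncategory_counts = df.iloc[:, 5:].sum().sort_values(ascending=False)[0:10]\n\ntop10_graph = {\n    'data': [go.Bar(x=list(category_counts.index), y=category_counts.values)],\n    'layout': {'title': 'Top 10 Message Categories',\n               'yaxis': {'title': 'Count'},\n               'xaxis': {'title': 'Category'}}\n}\n# in the Flask app this dict would typically be serialized with plotly.utils.PlotlyJSONEncoder",
"_____no_output_____"
]
],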
[
[
"top_10_mes = df.iloc[:,5:].sum().sort_values(ascending=False)[0:10]\ntop_10_mes\ntop_10_mes_names = list(top_10_mes.index)\ntop_10_mes_names",
"_____no_output_____"
],
[
"mes_categories = df.columns[5:-1]\nmes_categories",
"_____no_output_____"
],
[
"mes_categories_count = df[mes_categories].sum()\nmes_categories_count",
"_____no_output_____"
],
[
"bottom_10_mes = df.iloc[:,5:].sum().sort_values()[0:10]\nbottom_10_mes",
"_____no_output_____"
],
[
"bottom_10_mes_names = list(bottom_10_mes.index)\nbottom_10_mes_names",
"_____no_output_____"
],
[
"distr_class_1 = df.drop(['id', 'message', 'original', 'genre'], axis = 1).sum()/len(df)",
"_____no_output_____"
],
[
"distr_class_1 = distr_class_1.sort_values(ascending = False)",
"_____no_output_____"
],
[
"distr_class_0 = 1 - distr_class_1",
"_____no_output_____"
],
[
"distr_class_names = list(distr_class_1.index)",
"_____no_output_____"
],
[
"list(distr_class_1.index)",
"_____no_output_____"
],
[
"distr_class_1 = df.drop(['id', 'message', 'original', 'genre'], axis = 1).sum()/len(df)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2287df2ab4fe97120a8ac4971a43ff8fb0ea5 | 46,659 | ipynb | Jupyter Notebook | python/d2l-en/mxnet/chapter_recommender-systems/autorec.ipynb | rtp-aws/devpost_aws_disaster_recovery | 2ccfff2d8b85614f3043f09d98c9981dedf43c05 | [
"MIT"
] | 1 | 2022-01-13T23:36:05.000Z | 2022-01-13T23:36:05.000Z | python/d2l-en/mxnet/chapter_recommender-systems/autorec.ipynb | rtp-aws/devpost_aws_disaster_recovery | 2ccfff2d8b85614f3043f09d98c9981dedf43c05 | [
"MIT"
] | 9 | 2022-01-13T19:34:34.000Z | 2022-01-14T19:41:18.000Z | python/d2l-en/mxnet/chapter_recommender-systems/autorec.ipynb | rtp-aws/devpost_aws_disaster_recovery | 2ccfff2d8b85614f3043f09d98c9981dedf43c05 | [
"MIT"
] | null | null | null | 42.649909 | 794 | 0.510555 | [
[
[
"# AutoRec: Rating Prediction with Autoencoders\n\nAlthough the matrix factorization model achieves decent performance on the rating prediction task, it is essentially a linear model. Thus, such models are not capable of capturing complex nonlinear and intricate relationships that may be predictive of users' preferences. In this section, we introduce a nonlinear neural network collaborative filtering model, AutoRec :cite:`Sedhain.Menon.Sanner.ea.2015`. It identifies collaborative filtering (CF) with an autoencoder architecture and aims to integrate nonlinear transformations into CF on the basis of explicit feedback. Neural networks have been proven to be capable of approximating any continuous function, making it suitable to address the limitation of matrix factorization and enrich the expressiveness of matrix factorization.\n\nOn one hand, AutoRec has the same structure as an autoencoder which consists of an input layer, a hidden layer, and a reconstruction (output) layer. An autoencoder is a neural network that learns to copy its input to its output in order to code the inputs into the hidden (and usually low-dimensional) representations. In AutoRec, instead of explicitly embedding users/items into low-dimensional space, it uses the column/row of the interaction matrix as the input, then reconstructs the interaction matrix in the output layer.\n\nOn the other hand, AutoRec differs from a traditional autoencoder: rather than learning the hidden representations, AutoRec focuses on learning/reconstructing the output layer. It uses a partially observed interaction matrix as the input, aiming to reconstruct a completed rating matrix. In the meantime, the missing entries of the input are filled in the output layer via reconstruction for the purpose of recommendation. \n\nThere are two variants of AutoRec: user-based and item-based. For brevity, here we only introduce the item-based AutoRec. User-based AutoRec can be derived accordingly.\n\n\n## Model\n\nLet $\\mathbf{R}_{*i}$ denote the $i^\\mathrm{th}$ column of the rating matrix, where unknown ratings are set to zeros by default. The neural architecture is defined as:\n\n$$\nh(\\mathbf{R}_{*i}) = f(\\mathbf{W} \\cdot g(\\mathbf{V} \\mathbf{R}_{*i} + \\mu) + b)\n$$\n\nwhere $f(\\cdot)$ and $g(\\cdot)$ represent activation functions, $\\mathbf{W}$ and $\\mathbf{V}$ are weight matrices, $\\mu$ and $b$ are biases. Let $h( \\cdot )$ denote the whole network of AutoRec. The output $h(\\mathbf{R}_{*i})$ is the reconstruction of the $i^\\mathrm{th}$ column of the rating matrix.\n\nThe following objective function aims to minimize the reconstruction error:\n\n$$\n\\underset{\\mathbf{W},\\mathbf{V},\\mu, b}{\\mathrm{argmin}} \\sum_{i=1}^M{\\parallel \\mathbf{R}_{*i} - h(\\mathbf{R}_{*i})\\parallel_{\\mathcal{O}}^2} +\\lambda(\\| \\mathbf{W} \\|_F^2 + \\| \\mathbf{V}\\|_F^2)\n$$\n\nwhere $\\| \\cdot \\|_{\\mathcal{O}}$ means only the contribution of observed ratings are considered, that is, only weights that are associated with observed inputs are updated during back-propagation.\n",
"_____no_output_____"
]
],
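[
[
"The masking in the objective above is easy to see in miniature. Below is a small NumPy sketch (illustrative only, not part of the original implementation; the toy ratings, dimensions, and parameter values are made up) showing how multiplying by the observation mask keeps unobserved entries of a rating column out of the reconstruction error.\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\n# Toy rating column for one item over 4 users; 0 marks an unobserved rating\nr_col = np.array([5.0, 0.0, 3.0, 0.0])\nmask = np.sign(r_col)  # 1 for observed entries, 0 otherwise\n\n# Random AutoRec-style parameters with hidden size 2\nV = rng.normal(scale=0.1, size=(2, 4))   # encoder weights\nmu = np.zeros(2)                         # encoder bias\nW = rng.normal(scale=0.1, size=(4, 2))   # decoder weights\nb = np.zeros(4)                          # decoder bias\n\ndef sigmoid(x):\n    return 1.0 / (1.0 + np.exp(-x))\n\n# h(R_*i) = f(W g(V R_*i + mu) + b), with g sigmoid and f identity\nrecon = W @ sigmoid(V @ r_col + mu) + b\n\n# Only observed ratings contribute to the squared reconstruction error\nmasked_error = np.sum(((r_col - recon) * mask) ** 2)\nprint(masked_error)\n```",
"_____no_output_____"
]
],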
[
[
"import mxnet as mx\nfrom mxnet import autograd, gluon, np, npx\nfrom mxnet.gluon import nn\nfrom d2l import mxnet as d2l\n\nnpx.set_np()",
"_____no_output_____"
]
],
[
[
"## Implementing the Model\n\nA typical autoencoder consists of an encoder and a decoder. The encoder projects the input to hidden representations and the decoder maps the hidden layer to the reconstruction layer. We follow this practice and create the encoder and decoder with dense layers. The activation of encoder is set to `sigmoid` by default and no activation is applied for decoder. Dropout is included after the encoding transformation to reduce over-fitting. The gradients of unobserved inputs are masked out to ensure that only observed ratings contribute to the model learning process.\n",
"_____no_output_____"
]
],
[
[
"class AutoRec(nn.Block):\n def __init__(self, num_hidden, num_users, dropout=0.05):\n super(AutoRec, self).__init__()\n self.encoder = nn.Dense(num_hidden, activation='sigmoid',\n use_bias=True)\n self.decoder = nn.Dense(num_users, use_bias=True)\n self.dropout = nn.Dropout(dropout)\n\n def forward(self, input):\n hidden = self.dropout(self.encoder(input))\n pred = self.decoder(hidden)\n if autograd.is_training(): # Mask the gradient during training\n return pred * np.sign(input)\n else:\n return pred",
"_____no_output_____"
]
],
[
[
"## Reimplementing the Evaluator\n\nSince the input and output have been changed, we need to reimplement the evaluation function, while we still use RMSE as the accuracy measure.\n",
"_____no_output_____"
]
],
[
[
"def evaluator(network, inter_matrix, test_data, devices):\n scores = []\n for values in inter_matrix:\n feat = gluon.utils.split_and_load(values, devices, even_split=False)\n scores.extend([network(i).asnumpy() for i in feat])\n recons = np.array([item for sublist in scores for item in sublist])\n # Calculate the test RMSE\n rmse = np.sqrt(np.sum(np.square(test_data - np.sign(test_data) * recons))\n / np.sum(np.sign(test_data)))\n return float(rmse)",
"_____no_output_____"
]
],
[
[
"## Training and Evaluating the Model\n\nNow, let us train and evaluate AutoRec on the MovieLens dataset. We can clearly see that the test RMSE is lower than the matrix factorization model, confirming the effectiveness of neural networks in the rating prediction task.\n",
"_____no_output_____"
]
],
[
[
"devices = d2l.try_all_gpus()\n# Load the MovieLens 100K dataset\ndf, num_users, num_items = d2l.read_data_ml100k()\ntrain_data, test_data = d2l.split_data_ml100k(df, num_users, num_items)\n_, _, _, train_inter_mat = d2l.load_data_ml100k(train_data, num_users,\n num_items)\n_, _, _, test_inter_mat = d2l.load_data_ml100k(test_data, num_users,\n num_items)\ntrain_iter = gluon.data.DataLoader(train_inter_mat, shuffle=True,\n last_batch=\"rollover\", batch_size=256,\n num_workers=d2l.get_dataloader_workers())\ntest_iter = gluon.data.DataLoader(np.array(train_inter_mat), shuffle=False,\n last_batch=\"keep\", batch_size=1024,\n num_workers=d2l.get_dataloader_workers())\n# Model initialization, training, and evaluation\nnet = AutoRec(500, num_users)\nnet.initialize(ctx=devices, force_reinit=True, init=mx.init.Normal(0.01))\nlr, num_epochs, wd, optimizer = 0.002, 25, 1e-5, 'adam'\nloss = gluon.loss.L2Loss()\ntrainer = gluon.Trainer(net.collect_params(), optimizer,\n {\"learning_rate\": lr, 'wd': wd})\nd2l.train_recsys_rating(net, train_iter, test_iter, loss, trainer, num_epochs,\n devices, evaluator, inter_mat=test_inter_mat)",
"train loss 0.000, test RMSE 0.898\n34453968.4 examples/sec on [gpu(0), gpu(1)]\n"
]
],
[
[
"## Summary\n\n* We can frame the matrix factorization algorithm with autoencoders, while integrating non-linear layers and dropout regularization. \n* Experiments on the MovieLens 100K dataset show that AutoRec achieves superior performance than matrix factorization.\n\n\n\n## Exercises\n\n* Vary the hidden dimension of AutoRec to see its impact on the model performance.\n* Try to add more hidden layers. Is it helpful to improve the model performance?\n* Can you find a better combination of decoder and encoder activation functions?\n",
"_____no_output_____"
],
[
"[Discussions](https://discuss.d2l.ai/t/401)\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c237d677b7ec885747abf21a8da4ab1f20f7aa | 226,085 | ipynb | Jupyter Notebook | tutorials/MD/04_Trajectory_Analysis_Solutions.ipynb | AfroditiMariaZaki/OxCompBio | 15d23de729c1558d80f476bf5a0a1f1cf3311bc6 | [
"BSD-3-Clause"
] | null | null | null | tutorials/MD/04_Trajectory_Analysis_Solutions.ipynb | AfroditiMariaZaki/OxCompBio | 15d23de729c1558d80f476bf5a0a1f1cf3311bc6 | [
"BSD-3-Clause"
] | null | null | null | tutorials/MD/04_Trajectory_Analysis_Solutions.ipynb | AfroditiMariaZaki/OxCompBio | 15d23de729c1558d80f476bf5a0a1f1cf3311bc6 | [
"BSD-3-Clause"
] | null | null | null | 308.85929 | 35,044 | 0.928894 | [
[
[
"# <span style='color:darkred'> 4 Trajectory Analysis </span>\n\n***",
"_____no_output_____"
],
[
"**<span style='color:darkred'> Important Note </span>**\n\nBefore proceeding to the rest of the analysis, it is a good time to define a path that points to the location of the MD simulation data, which we will analyze here.\n\nIf you successfully ran the MD simulation, the correct path should be:",
"_____no_output_____"
]
],
[
[
"path=\"OxCompBio-Datafiles/run\"",
"_____no_output_____"
]
],
[
[
"If however, you need/want to use the data from the simulation that has been already performed, uncomment the command below to instead define the path that points to the prerun simulation.",
"_____no_output_____"
]
],
[
[
"#path=\"OxCompBio-Datafiles/prerun/run\"",
"_____no_output_____"
]
],
[
[
"## <span style='color:darkred'> 4.1 Visualize the simulation </span>\n\nThe simplest and easiest type of analysis you should always do is to look at it with your eyes! Your eyes will tell you if something strange is happening immediately. A numerical analysis may not.\n\n### <span style='color:darkred'> 4.1.1 VMD </span>\n\n*Note: Again, this step is optional. If you don't have VMD, go to section 4.1.2 below to visualize the trajectory with NGLView instead.*\n\nLet us look at the simulations on VMD.\n\nOpen your vmd, by typing on your terminal:\n\n`% vmd`\n\nWhen it has finished placing all the windows on the screen. Click on `File` in the VMD main menu window and select `New Molecule`. The Molecule File Browser window should appear. Click on `Browse...` then select the `OxCompBio-Datafiles` and then the `run` directory and finally select `em.gro` (i.e. the file you made that has protein system energy minimized). Click `OK` and then click `Load`. It should load up the starting coordinates into the main window. Then click `Browse...` in the Molecule File Browser window. Select again the `OxCompBio-Datafiles`, then the `run` directory and then `md.xtc`. Select `OK` and then hit `Load`. The trajectory should start loading into the main VMD window. \n\nAlthough things will be moving, you can see that it is quite difficult to visualize the individual components. That is one of the problems with simulating such large and complicated systems. VMD makes it quite easy to look at individual components of a system. For example, let us consider the protein only. On the VMD Main menu, left-click on Graphics and select `Representations`. A new menu will appear (`Graphical Representations`). In the box entitled `Selected Atoms` type protein and hit enter. Only those atoms that form part of the protein are now selected. Various other selections and drawing methods will help to visualize different aspects of the simulation. ",
"_____no_output_____"
],
[
"<span style='color:Blue'> **Questions** </span> \n\n* How would you say the protein behaves? \n\n\n* Is it doing anything unexpected? What would you consider unexpected behaviour?",
"_____no_output_____"
],
[
"### <span style='color:darkred'> 4.1.2 NGLView </span>\n",
"_____no_output_____"
],
[
"You have already tested NGLView at the Python tutorial (Notebook `12_ProteinAnalysis`) and at the beginning of this tutorial. This time however, you can visualize the trajectory you generated after carrying out the MD simulation.\n\nYou should also be familiar now with the MDAnalysis Python library that we will use to analyze the MD trajectory. We will also use it below, to create a Universe and load it on NGLView.",
"_____no_output_____"
]
],
[
[
"# Import MDAnalysis and NGLView\nimport MDAnalysis\nimport nglview\n\n# Load the protein structure and the trajectory as a universe named protein\nprotein=MDAnalysis.Universe(f\"{path}/em.gro\", f\"{path}/md_fit.xtc\")\nprotein_view = nglview.show_mdanalysis(protein)\nprotein_view.gui_style = 'ngl'\n\n#Color the protein based on its secondary structure\nprotein_view.update_cartoon(color='sstruc')\nprotein_view\n\n",
"_____no_output_____"
]
],
[
[
"<span style='color:Blue'> **Questions** </span> \n\n* How would you say the protein behaves? \n\n\n* Is it doing anything unexpected? What would you consider unexpected behaviour?",
"_____no_output_____"
],
[
"Now that we are sure the simulation is not doing anything ridiculous, we can start to ask questions about the simulation. The first thing to establish is whether the simulation has equilibrated to some state. So what are some measures of the system\nbeing equilibrated? And what can we use to test the reliability of the simulation?",
"_____no_output_____"
],
[
"## <span style='color:darkred'> 4.2 System Equilibration </span> ",
"_____no_output_____"
],
[
"### <span style='color:darkred'> 4.2.1 Temperature fluctuation </span>\n\nThe system temperature as a function of time was calculated in the previous section, with the built-in GROMACS tool `gmx energy`, but we still have not looked at it. It is now time to plot the temperature *vs* time and assess the results.\n\n<span style='color:Blue'> **Questions** </span> \n\n* Does the temperature fluctuate around an equilibrium value?\n\n\n* Does this value correspond to the temperature that we predefined in the `md.mdp` input file?",
"_____no_output_____"
],
[
"Import numpy and pyplot from matplotlib, required to read and plot the data, respectively.",
"_____no_output_____"
]
],
[
[
"# We declare matplotlib inline to make sure it plots properly\n%matplotlib inline\n# We need to import numpy \nimport numpy as np\n# We need pyplot from matplotlib to generate our plots\nfrom matplotlib import pyplot",
"_____no_output_____"
]
],
[
[
"Now, using numpy, we can read the data from the `1hsg_temperature.xvg` file; the first column is the time (in ps) and the secong is the system temperature (in K).",
"_____no_output_____"
]
],
[
[
"# Read the file that contains the system temperature for each frame\ntime=np.loadtxt(f\"{path}/1hsg_temperature.xvg\", comments=['#','@'])[:, 0]\ntemperature=np.loadtxt(f\"{path}/1hsg_temperature.xvg\", comments=['#','@'])[:, 1] ",
"_____no_output_____"
]
],
[
[
"You can use numpy again to compute the average temperature and its standard deviation.",
"_____no_output_____"
]
],
[
[
"# Calculate and print the mean temperature and the standard deviation\n# Keep only two decimal points\nmean_temperature=round(np.mean(temperature), 2)\nstd_temperature=round(np.std(temperature), 2)\nprint(f\"The mean temperature is {mean_temperature} ± {std_temperature} K\")",
"The mean temperature is 300.01 ± 1.79 K\n"
]
],
[
[
"Finally, you can plot the temperature *vs* simulation time.",
"_____no_output_____"
]
],
[
[
"# Plot the temperature\npyplot.plot(time, temperature, color='darkred')\npyplot.title(\"Temperature over time\")\npyplot.xlabel(\"Time [ps]\")\npyplot.ylabel(\"Temperature [K]\")\npyplot.show()",
"_____no_output_____"
]
],
[
[
"### <span style='color:darkred'> 4.2.2 Energy of the system </span> \n\nAnother set of properties that is quite useful to examine is the various energetic contributions to the energy. The total\nenergy should be constant. but the various contributions can change and this can sometimes indicate something\ninteresting or strange happening in your simulation. Let us look at some energetic properties of the simulation.\n\nWe have already exctracted the Lennard-Jones energy, the Coulomb energy and the potential energy using again the GROMACS built-in tool `gmx energy`. The data of these three energetic components are saved in the same file called `1hsg_energies.xvg`; the first column contains the time (in ps) and the columns that follow contain the energies (in kJ/mol), in the same order as they were generated.\n\nWe can now read the data from the `1hsg_energies.xvg` file using numpy.",
"_____no_output_____"
]
],
[
[
"# Read the file that contains the various energetic components for each frame\ntime=np.loadtxt(f\"{path}/1hsg_energies.xvg\", comments=['#','@'])[:, 0]\nlennard_jones=np.loadtxt(f\"{path}/1hsg_energies.xvg\", comments=['#','@'])[:, 1]\ncoulomb=np.loadtxt(f\"{path}/1hsg_energies.xvg\", comments=['#','@'])[:, 2]\npotential=np.loadtxt(f\"{path}/1hsg_energies.xvg\", comments=['#','@'])[:, 3]",
"_____no_output_____"
]
],
[
[
"And now that we read the data file, we can plot the energetic components *vs* simulation time in separate plots using matplotlib.",
"_____no_output_____"
]
],
[
[
"# Plot the Lennard-Jones energy\npyplot.plot(time, lennard_jones, color='blue')\npyplot.title(\"Lennard Jones energy over time\")\npyplot.xlabel(\"Time [ps]\")\npyplot.ylabel(\"LJ energy [kJ/mol]\")\npyplot.show()\n\n# Plot the electrostatic energy\npyplot.plot(time, coulomb, color='purple')\npyplot.title(\"Electrostatic energy over time\")\npyplot.xlabel(\"Time [ps]\")\npyplot.ylabel(\"Coulomb energy [kJ/mol]\")\npyplot.show()\n\n# Plot the potential energy\npyplot.plot(time, potential, color='green')\npyplot.title(\"Potential energy over time\")\npyplot.xlabel(\"Time [ps]\")\npyplot.ylabel(\"Potential energy [kJ/mol]\")\npyplot.show()",
"_____no_output_____"
]
],
[
[
"<span style='color:Blue'> **Questions** </span> \n\n* Can you plot the Coulomb energy and the potential energy, following the same steps as above? \n\n\n* Is the total energy stable in this simulation? \n\n\n* What is the dominant contribution to the potential energy?",
"_____no_output_____"
],
[
"## <span style='color:darkred'> 4.3 Analysis of Protein </span>\n\n### <span style='color:darkred'> 4.3.1 Root mean square deviation (RMSD) of 1HSG </span>\n\nThe RMSD gives us an idea of how 'stable' our protein is when compared to our starting, static, structure. The lower the RMSD is, the more stable we can say our protein is. \n\nThe RMSD as a function of time, $\\rho (t)$, can be defined by the following equation:\n\n\\begin{equation}\n\\\\\n\\rho (t) = \\sqrt{\\frac{1}{N}\\sum^N_{i=1}w_i\\big(\\mathbf{x}_i(t) - \\mathbf{x}^{\\text{ref}}_i\\big)^2}\n\\end{equation}\n\nLuckily MDAnalysis has its own built-in function to calculate this and we can import it.\n\n",
"_____no_output_____"
]
],
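[
[
"As a quick sanity check on the RMSD formula above, the following NumPy sketch computes the same quantity by hand for a tiny set of made-up coordinates (equal weights, three atoms). For real trajectories, the MDAnalysis routine used in the next cell is the way to go.\n\n```python\nimport numpy as np\n\n# Made-up coordinates: 3 atoms (rows) with x, y, z (columns)\nx_t = np.array([[0.0, 0.0, 0.0],\n                [1.2, 0.1, 0.0],\n                [2.0, 0.0, 0.3]])   # positions at time t\nx_ref = np.array([[0.0, 0.0, 0.0],\n                  [1.0, 0.0, 0.0],\n                  [2.0, 0.0, 0.0]])  # reference positions\n\n# rho(t) = sqrt( (1/N) * sum_i |x_i(t) - x_i_ref|^2 ) with equal weights\nrmsd_manual = np.sqrt(np.mean(np.sum((x_t - x_ref) ** 2, axis=1)))\nprint(rmsd_manual)\n```",
"_____no_output_____"
]
],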
[
[
"# Import built-in MDAnalysis tools for alignment and RMSD.\nfrom MDAnalysis.analysis import align\nfrom MDAnalysis.analysis.rms import RMSD as rmsd\n\n# Define the simulation universe and the reference structure (protein structure at first frame)\nprotein = MDAnalysis.Universe(f\"{path}/md.gro\", f\"{path}/md_fit.xtc\")\nprotein_ref = MDAnalysis.Universe(f\"{path}/em.gro\", f\"{path}/md_fit.xtc\")\nprotein_ref.trajectory[0]\n\n# Call the MDAnalysis align function to align the MD simulation universe to the reference (first frame) universe\nalign_strucs = align.AlignTraj(protein, protein_ref, select=\"backbone\", weights=\"mass\", in_memory=True, verbose=True)\n\nR = align_strucs.run()\nrmsd_data = R.rmsd\n\n# Plot the RMSD\npyplot.plot(rmsd_data)\npyplot.title(\"RMSD over time\")\npyplot.xlabel(\"Frame number\")\npyplot.ylabel(\"RMSD (Angstrom)\")\npyplot.show()\n",
"_____no_output_____"
]
],
[
[
"<span style='color:Blue'> **Questions** </span> \n\n* What does this tell you about the stability of the protein? Is it in a state of equilibrium and if so why and at what time?\n\n\n* Can you think of a situation where this approach might not be a very good indication of stability?",
"_____no_output_____"
],
[
"### <span style='color:darkred'> 4.3.2 Root mean square fluctuation (RMSF) of 1HSG </span>\n\nA similar property that is particularly useful is the root mean square fluctuation (RMSF), which shows how each residue flucuates over its average position.\n\nThe RMSF for an atom, $\\rho_i$, is given by:\n\n\\begin{equation}\n\\rho_i = \\sqrt{\\sum^N_{i=1} \\big\\langle(\\mathbf{x}_i - \\langle \\mathbf{x}_i \\rangle )^2 \\big\\rangle }\n\\end{equation}",
"_____no_output_____"
]
],
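[
[
"The RMSF can be checked by hand in the same spirit: the sketch below treats a made-up array of shape (frames, atoms, 3) as a trajectory and takes, for each atom, the square root of the time-averaged squared deviation from its mean position.\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(1)\n\n# Made-up trajectory: 100 frames, 5 atoms, 3 coordinates\ntraj = rng.normal(scale=0.5, size=(100, 5, 3))\n\nmean_pos = traj.mean(axis=0)                      # average position of each atom\nsq_dev = np.sum((traj - mean_pos) ** 2, axis=-1)  # squared deviation per frame and atom\nrmsf_manual = np.sqrt(sq_dev.mean(axis=0))        # average over frames, then sqrt\nprint(rmsf_manual)\n```",
"_____no_output_____"
]
],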
[
[
"from MDAnalysis.analysis.rms import RMSF as rmsf\n\n# Define again the simulation universe, using however the renumbered .gro file that you had generated earlier\nprotein = MDAnalysis.Universe(f\"{path}/em.gro\", f\"{path}/md_fit.xtc\")\n\n# Reset the trajectory to the first frame\nprotein.trajectory[0]\n\n# We will need to select the alpha Carbons only\ncalphas = protein.select_atoms(\"name CA\")\n\n# Compute the RMSF of alpha carbons. Omit the first 20 frames,\n# assuming that the system needs this amount of time (200 ps) to equilibrate\nrmsf_calc = rmsf(calphas, verbose=True).run(start=20)\n\n# Plot the RMSF\npyplot.plot(calphas.resindices+1, rmsf_calc.rmsf, color='darkorange' )\npyplot.title(\"Per-Residue Alpha Carbon RMSF\")\npyplot.xlabel(\"Residue Number\")\npyplot.ylabel(\"RMSF (Angstrom)\")\npyplot.show()",
"_____no_output_____"
]
],
[
[
"<span style='color:Blue'> **Questions** </span> \n\n* Can you identify structural regions alone from this plot and does that fit in with the structure?\n\n\n* Residues 43-58 form part of the flexible flap that covers the binding site. How does this region behave in the simulation?",
"_____no_output_____"
],
[
"### <span style='color:darkred'> 4.3.3 Hydrogen Bond Formation </span>\n\nWe can also use the simulation to monitor the formation of any hydrogen bonds that may be of interest.\n\nIn the case of HIV-1 protease, the hydrogen bonds (HB) that are formed between the ARG8', the ASP29 and the ARG87 amino acids at the interface of the two subunits act in stabilising the dimer.\n\nWe can analyse the trajectory and monitor the stability of these interactions *vs* simulation time.",
"_____no_output_____"
]
],
[
[
"# Import the MDAnalysis built-in tool for HB Analysis\nfrom MDAnalysis.analysis.hydrogenbonds.hbond_analysis import HydrogenBondAnalysis as HBA\n\n# Define the protein universe\n# Note that when using this tool, it is recommended to include the .tpr file instead of the .gro file,\n# because it contains bond information, required for the identification of donors and acceptors.\nprotein = MDAnalysis.Universe(f\"{path}/md.tpr\", f\"{path}/md.xtc\")\n\n# Define the atom selections for the HB calculation.\n# In this case, the ARG hydrogens and the ASP oxygens, which act as the HB acceptors are specifically defined.\nhbonds = HBA(universe=protein, hydrogens_sel='resname ARG and name HH21 HH22', acceptors_sel='resname ASP and name OD1 OD2')\n\n# Perform the HB calculation\nhbonds.run()\n \n# Plot the total number of ASP-ARG HBs vs time\nhbonds_time=hbonds.times\nhbonds_data=hbonds.count_by_time()\n\npyplot.plot(hbonds_time, hbonds_data, color='darkorange')\npyplot.title(\"ASP-ARG Hydrogen Bonds\")\npyplot.xlabel(\"Time [ps]\")\npyplot.ylabel(\"# Hydrogen Bonds\")\npyplot.show()\n\n# Compute and print the average number of HBs and the standard deviation\naver_hbonds=round(np.mean(hbonds_data), 2)\nstd_hbonds=round(np.std(hbonds_data), 2)\nprint(f\"The average number of ASP-ARG HBs is {aver_hbonds} ± {std_hbonds}\")",
"/home/mjkikaz2/anaconda3/envs/oxpy/lib/python3.6/site-packages/MDAnalysis/core/topologyattrs.py:2011: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n np.array(sorted(unique_bonds)), 4)\n"
]
],
[
[
"<span style='color:Blue'> **Questions** </span> \n\n* How much variation is there in the number of hydrogen bonds?\n\n\n* Do any break and not reform? \n\n\n* Using VMD, can you observe the HB formation and breakage throughout the simulation?\n\n***",
"_____no_output_____"
],
[
"This concludes the analysis section, but the aim was only to give you an idea of the numerous information that we can gain when analysing an MD trajectory. Feel free to ask and attempt to answer your own questions, utilising the tools that you were introduced to during the tutorial.",
"_____no_output_____"
],
[
"## <span style='color:darkred'> 4.4 Further Reading </span>\n\n\nThe texts recommended here are the same as those mentioned in the lecture:\n* \"Molecular Modelling. Principles and Applications\". Andrew Leach. Publisher: Prentice Hall. ISBN: 0582382106. This book has rapidly become the defacto introductory text for all aspects of simulation.\n* \"Computer simulation of liquids\". Allen, Michael P., and Dominic J. Tildesley. Oxford university press, 2017.\n* \"Molecular Dynamics Simulation: Elementary Methods\". J.M. Haile. Publisher: Wiley. ISBN: 047118439X. This text provides a more focus but slightly more old-fashioned view of simulation. It has some nice simple examples of how to code (in fortran) some of the algorithms though.\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0c247dc1481762c5c773d3ef6b3dd69efde1477 | 780,841 | ipynb | Jupyter Notebook | notebooks/peller/Validity_range.ipynb | JanWeldert/freeDOM | 242fc7e76943bb47f2d7cca3f56f77606f260e40 | [
"Apache-2.0"
] | null | null | null | notebooks/peller/Validity_range.ipynb | JanWeldert/freeDOM | 242fc7e76943bb47f2d7cca3f56f77606f260e40 | [
"Apache-2.0"
] | 16 | 2020-05-21T02:02:25.000Z | 2022-03-07T10:41:55.000Z | notebooks/peller/Validity_range.ipynb | JanWeldert/freeDOM | 242fc7e76943bb47f2d7cca3f56f77606f260e40 | [
"Apache-2.0"
] | 5 | 2020-05-06T09:49:39.000Z | 2021-05-12T15:58:14.000Z | 1,571.108652 | 365,004 | 0.955645 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\n%load_ext autoreload\n%autoreload 2\n\nfrom freedom.utils.i3cols_dataloader import load_hits, load_strings\n\nimport dragoman as dm\n\n%load_ext line_profiler",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"plt.rcParams['figure.figsize'] = [12., 8.]\nplt.rcParams['xtick.labelsize'] = 14\nplt.rcParams['ytick.labelsize'] = 14 \nplt.rcParams['axes.labelsize'] = 16\nplt.rcParams['axes.titlesize'] = 16\nplt.rcParams['legend.fontsize'] = 14",
"_____no_output_____"
],
[
"single_hits, repeated_params, labels = load_hits('/home/iwsatlas1/peller/work/oscNext/level3_v01.03/140000_i3cols') #,'/home/iwsatlas1/peller/work/oscNext/level3_v01.03/120000_i3cols'])",
"_____no_output_____"
],
[
"strings, params, labels = load_strings('/home/iwsatlas1/peller/work/oscNext/level3_v01.03/140000_i3cols')",
"_____no_output_____"
],
[
"strings = strings.reshape(-1, 86, 5)",
"_____no_output_____"
],
[
"string_charge = np.sum(strings[:, :, 3], axis =0)",
"_____no_output_____"
],
[
"plt.bar(np.arange(86),string_charge)\nplt.savefig('../../plots/charge_per_string.png')",
"_____no_output_____"
],
[
"hits = dm.PointData()\nhits['delta_time'] = - (repeated_params[:, 3] - single_hits[:, 3])",
"_____no_output_____"
],
[
"hits.histogram(delta_time=100).plot()\nplt.gca().set_yscale('log')\nplt.gca().set_xlabel(r'$t_{hit} - t_{vertex}$ (ns)')\nplt.savefig('../../plots/delta_time_range.png')",
"_____no_output_____"
],
[
"params",
"_____no_output_____"
],
[
"mc = dm.PointData()\nfor i, label in enumerate(labels):\n mc[label] = params[:, i]",
"_____no_output_____"
],
[
"mc",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(2, 4, figsize=(25, 10))\nplt.subplots_adjust(wspace=0.35)\n\ndef plot(x, y, ax):\n np.log(mc.histogram(**{x:100, y:100})['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)', ax=ax)\n \nplot('x', 'y', ax[0,0])\nplot('x', 'z', ax[0,1])\nplot('y', 'z', ax[0,2])\nplot('time', 'z', ax[0,3])\nplot('azimuth', 'zenith', ax[1,0])\nplot('zenith', 'z', ax[1,1])\nplot('azimuth', 'x', ax[1,2])\nplot('cascade_energy', 'track_energy', ax[1,3])\n\n\n#np.log(mc.histogram(x=100, y=100)['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')\n#np.log(mc.kde(x=100, y=100, density=False)['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')\nplt.savefig('../../plots/validity.png')",
"/home/iwsatlas1/peller/dragoman/dragoman/core/gridarray.py:354: RuntimeWarning: divide by zero encountered in log\n ufunc, method, *inputs, **kwargs\n"
],
[
"np.log(mc.histogram(**{'x':100, 'z':100})['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')",
"_____no_output_____"
],
[
"np.log(mc.histogram(cascade_energy=100, track_energy=100)['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)')",
"_____no_output_____"
],
[
"np.median(mc['x'])",
"_____no_output_____"
],
[
"np.median(mc['y'])",
"_____no_output_____"
],
[
"inside = ((mc['x'] - 15)**2 + ((mc['y'] + 35)*1.3)**2 < 250**2) & (mc['z'] < -120) & (mc['z'] > -600) & (mc['track_energy'] + mc['cascade_energy'] < 2000) & (mc['time'] > 8500) & (mc['time'] < 10500)",
"_____no_output_____"
],
[
"mc[inside]",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(2, 4, figsize=(25, 10))\nplt.subplots_adjust(wspace=0.35)\n\ndef plot(x, y, ax):\n np.log(mc[inside].histogram(**{x:100, y:100})['counts']).plot(cmap='Spectral_r', cbar=True, label='log(# events)', ax=ax)\n \nplot('x', 'y', ax[0,0])\nplot('x', 'z', ax[0,1])\nplot('y', 'z', ax[0,2])\nplot('time', 'z', ax[0,3])\nplot('azimuth', 'zenith', ax[1,0])\nplot('zenith', 'z', ax[1,1])\nplot('azimuth', 'x', ax[1,2])\nplot('cascade_energy', 'track_energy', ax[1,3])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c24bbbb832f5eca2acd73a6adf2e543e566f2b | 6,725 | ipynb | Jupyter Notebook | test/Models/pheno_pkg/test/r/Cumulttfrom.ipynb | cyrillemidingoyi/PyCropML | b866cc17374424379142d9162af985c1f87c74b6 | [
"MIT"
] | 5 | 2020-06-21T18:58:04.000Z | 2022-01-29T21:32:28.000Z | test/Models/pheno_pkg/test/r/Cumulttfrom.ipynb | cyrillemidingoyi/PyCropML | b866cc17374424379142d9162af985c1f87c74b6 | [
"MIT"
] | 27 | 2018-12-04T15:35:44.000Z | 2022-03-11T08:25:03.000Z | test/Models/pheno_pkg/test/r/Cumulttfrom.ipynb | cyrillemidingoyi/PyCropML | b866cc17374424379142d9162af985c1f87c74b6 | [
"MIT"
] | 7 | 2019-04-20T02:25:22.000Z | 2021-11-04T07:52:35.000Z | 44.243421 | 148 | 0.456803 | [
[
[
"# Automatic generation of Notebook using PyCropML\n This notebook implements a crop model.",
"_____no_output_____"
],
[
"### Model Cumulttfrom",
"_____no_output_____"
]
],
[
[
"model_cumulttfrom <- function (calendarMoments_t1 = c('Sowing'),\n calendarCumuls_t1 = c(0.0),\n cumulTT = 8.0){\n #'- Name: CumulTTFrom -Version: 1.0, -Time step: 1\n #'- Description:\n #' * Title: CumulTTFrom Model\n #' * Author: Pierre Martre\n #' * Reference: Modeling development phase in the \n #' Wheat Simulation Model SiriusQuality.\n #' See documentation at http://www1.clermont.inra.fr/siriusquality/?page_id=427\n #' * Institution: INRA Montpellier\n #' * Abstract: Calculate CumulTT \n #'- inputs:\n #' * name: calendarMoments_t1\n #' ** description : List containing appearance of each stage at previous day\n #' ** variablecategory : state\n #' ** datatype : STRINGLIST\n #' ** default : ['Sowing']\n #' ** unit : \n #' ** inputtype : variable\n #' * name: calendarCumuls_t1\n #' ** description : list containing for each stage occured its cumulated thermal times at previous day\n #' ** variablecategory : state\n #' ** datatype : DOUBLELIST\n #' ** default : [0.0]\n #' ** unit : °C d\n #' ** inputtype : variable\n #' * name: cumulTT\n #' ** description : cumul TT at current date\n #' ** datatype : DOUBLE\n #' ** variablecategory : auxiliary\n #' ** min : -200\n #' ** max : 10000\n #' ** default : 8.0\n #' ** unit : °C d\n #' ** inputtype : variable\n #'- outputs:\n #' * name: cumulTTFromZC_65\n #' ** description : cumul TT from Anthesis to current date \n #' ** variablecategory : auxiliary\n #' ** datatype : DOUBLE\n #' ** min : 0\n #' ** max : 5000\n #' ** unit : °C d\n #' * name: cumulTTFromZC_39\n #' ** description : cumul TT from FlagLeafLiguleJustVisible to current date \n #' ** variablecategory : auxiliary\n #' ** datatype : DOUBLE\n #' ** min : 0\n #' ** max : 5000\n #' ** unit : °C d\n #' * name: cumulTTFromZC_91\n #' ** description : cumul TT from EndGrainFilling to current date \n #' ** variablecategory : auxiliary\n #' ** datatype : DOUBLE\n #' ** min : 0\n #' ** max : 5000\n #' ** unit : °C d\n cumulTTFromZC_65 <- 0.0\n cumulTTFromZC_39 <- 0.0\n cumulTTFromZC_91 <- 0.0\n if ('Anthesis' %in% calendarMoments_t1)\n {\n cumulTTFromZC_65 <- cumulTT - calendarCumuls_t1[which(calendarMoments_t1 %in% 'Anthesis')]\n }\n if ('FlagLeafLiguleJustVisible' %in% calendarMoments_t1)\n {\n cumulTTFromZC_39 <- cumulTT - calendarCumuls_t1[which(calendarMoments_t1 %in% 'FlagLeafLiguleJustVisible')]\n }\n if ('EndGrainFilling' %in% calendarMoments_t1)\n {\n cumulTTFromZC_91 <- cumulTT - calendarCumuls_t1[which(calendarMoments_t1 %in% 'EndGrainFilling')]\n }\n return (list (\"cumulTTFromZC_65\" = cumulTTFromZC_65,\"cumulTTFromZC_39\" = cumulTTFromZC_39,\"cumulTTFromZC_91\" = cumulTTFromZC_91))\n}",
"_____no_output_____"
],
[
"library(assertthat)\n\n\ntest_test_wheat1<-function(){\n params= model_cumulttfrom(\n calendarMoments_t1 = c(\"Sowing\",\"Emergence\",\"FloralInitiation\",\"FlagLeafLiguleJustVisible\",\"Heading\",\"Anthesis\"),\n calendarCumuls_t1 = c(0.0,112.330110409888,354.582294511779,741.510096671757,853.999637026622,954.59002776961),\n cumulTT = 972.970888983105\n )\n cumulTTFromZC_65_estimated = params$cumulTTFromZC_65\n cumulTTFromZC_65_computed = 18.38\n assert_that(all.equal(cumulTTFromZC_65_estimated, cumulTTFromZC_65_computed, scale=1, tol=0.2)==TRUE)\n cumulTTFromZC_39_estimated = params$cumulTTFromZC_39\n cumulTTFromZC_39_computed = 231.46\n assert_that(all.equal(cumulTTFromZC_39_estimated, cumulTTFromZC_39_computed, scale=1, tol=0.2)==TRUE)\n cumulTTFromZC_91_estimated = params$cumulTTFromZC_91\n cumulTTFromZC_91_computed = 0\n assert_that(all.equal(cumulTTFromZC_91_estimated, cumulTTFromZC_91_computed, scale=1, tol=0.2)==TRUE)\n}\ntest_test_wheat1()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0c261adbb23c28d7bfbaf49628679cea6f29521 | 196,348 | ipynb | Jupyter Notebook | StoreItemDemand/BenchmarkModels.ipynb | AidanCooper/Kaggle | 01dff47392806778b1c78124b279f7dfb9709cc0 | [
"MIT"
] | null | null | null | StoreItemDemand/BenchmarkModels.ipynb | AidanCooper/Kaggle | 01dff47392806778b1c78124b279f7dfb9709cc0 | [
"MIT"
] | 1 | 2020-03-31T11:54:20.000Z | 2020-03-31T11:54:20.000Z | StoreItemDemand/BenchmarkModels.ipynb | AidanCooper/Kaggle | 01dff47392806778b1c78124b279f7dfb9709cc0 | [
"MIT"
] | 1 | 2022-02-21T07:34:18.000Z | 2022-02-21T07:34:18.000Z | 382 | 184,040 | 0.933384 | [
[
[
"# Store Item Demand Forecasting Challenge",
"_____no_output_____"
],
[
"## Benchmark Models\n\n<a href=\"https://www.kaggle.com/c/demand-forecasting-kernels-only\">Link to competition on Kaggle.</a>\n\nIn this notebook, two simple benchmarking techniques are presented.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\npd.options.display.max_columns = 99\nplt.rcParams['figure.figsize'] = (16, 9)",
"_____no_output_____"
]
],
[
[
"## Load Data",
"_____no_output_____"
]
],
[
[
"df_train = pd.read_csv('data/train.csv', parse_dates=['date'], index_col=['date'])\ndf_test = pd.read_csv('data/test.csv', parse_dates=['date'], index_col=['date'])\ndf_train.shape, df_test.shape",
"_____no_output_____"
],
[
"df_train.head()",
"_____no_output_____"
],
[
"num_stores = len(df_train['store'].unique())\nfig, axes = plt.subplots(num_stores, figsize=(8, 16))\n\nfor s in df_train['store'].unique():\n t = df_train.loc[df_train['store'] == s, 'sales'].resample('W').sum()\n ax = t.plot(ax=axes[s-1])\n ax.grid()\n ax.set_xlabel('')\n ax.set_ylabel('sales')\nfig.tight_layout();",
"_____no_output_____"
]
],
[
[
"All stores appear to show identical trends and seasonality; they just differ in scale.",
"_____no_output_____"
],
[
"## Average Method\n\nFor our first and simplest model, we make our predictions using the average value from the historical data.",
"_____no_output_____"
],
[
"For our first and simplest model, we make our predictions using the average value from the historical data.",
"_____no_output_____"
]
],
[
[
"am_results = df_test.copy()\nam_results['sales'] = 0\n\nfor s in am_results['store'].unique():\n for i in am_results['item'].unique():\n historical_average = df_train.loc[(df_train['store'] == s) & (df_train['item'] == i), 'sales'].mean()\n am_results.loc[(am_results['store'] == s) & (am_results['item'] == i), 'sales'] = historical_average",
"_____no_output_____"
],
[
"am_results.reset_index(inplace=True)\nam_results.drop(['date', 'store', 'item'], axis=1, inplace=True)\nam_results.head()",
"_____no_output_____"
],
[
"am_results.to_csv('am_results.csv', index=False)",
"_____no_output_____"
]
],
[
[
"Scores 28.35111 on the leaderboard.",
"_____no_output_____"
],
[
"## Seasonal Naive Method\n\nFor this model, we predict the value from the same time the previous year.",
"_____no_output_____"
]
],
[
[
"snm_results = df_test.copy()\nsnm_results['sales'] = 0",
"_____no_output_____"
],
[
"import datetime\n\nprev_dates = snm_results.loc[(snm_results['store'] == 1) & (snm_results['item'] == 1)].index - datetime.timedelta(days=365)",
"_____no_output_____"
],
[
"for s in snm_results['store'].unique():\n for i in snm_results['item'].unique():\n snm_results.loc[(snm_results['store'] == s) & (snm_results['item'] == i), 'sales'] = \\\n df_train.loc[((df_train['store'] == s) & (df_train['item'] == i)) & (df_train.index.isin(prev_dates)), 'sales'].values",
"_____no_output_____"
],
[
"snm_results.reset_index(inplace=True)\nsnm_results.drop(['date', 'store', 'item'], axis=1, inplace=True)\nsnm_results.head()",
"_____no_output_____"
],
[
"snm_results.to_csv('snm_results.csv', index=False)",
"_____no_output_____"
]
],
[
[
"Scores 24.43958 on the leaderboard.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c2715aef848c9e6861c565e5a77c1068c8ebf6 | 69,603 | ipynb | Jupyter Notebook | calrecycle-disposal-reporting/calrecycle-report-1.ipynb | fndari/catdd-datasets | 24b3f67242507ab14232e0af382b55ea9e954699 | [
"MIT"
] | 1 | 2018-11-17T07:03:38.000Z | 2018-11-17T07:03:38.000Z | calrecycle-disposal-reporting/calrecycle-report-1.ipynb | fndari/catdd-datasets | 24b3f67242507ab14232e0af382b55ea9e954699 | [
"MIT"
] | null | null | null | calrecycle-disposal-reporting/calrecycle-report-1.ipynb | fndari/catdd-datasets | 24b3f67242507ab14232e0af382b55ea9e954699 | [
"MIT"
] | null | null | null | 38.180472 | 184 | 0.36632 | [
[
[
"import requests\nimport requests_cache\nrequests_cache.install_cache('calrecycle')\nimport pandas as pd\nimport time",
"_____no_output_____"
],
[
"URL = 'https://www2.calrecycle.ca.gov/LGCentral/DisposalReporting/Destination/CountywideSummary'\nparams = {'CountyID': 58, 'ReportFormat': 'XLS'}\nresp = requests.post(URL, data=params)\nresp",
"_____no_output_____"
],
[
"import io",
"_____no_output_____"
],
[
"def set_columns(df, columns=None, row_idx=None):\n df = df.copy()\n if row_idx:\n columns = df.iloc[row_idx, :].tolist()\n df.columns = columns\n return df",
"_____no_output_____"
],
[
"(pd.read_excel(io.BytesIO(resp.content))\n# .iloc[4,:].tolist()\n .pipe(set_columns, row_idx=4)\n .iloc[5:, :]\n .dropna(axis=1, how='all')\n .assign(is_data_row=lambda d: d['Destination Facility'].notnull())\n .fillna(method='ffill')\n .query('is_data_row')\n)",
"WARNING *** file size (47797) not 512 + multiple of sector size (512)\n"
],
[
"def make_throttle_hook(timeout=1):\n \"\"\"\n Returns a response hook function which sleeps for `timeout` seconds if\n response is not coming from the cache.\n\n From https://requests-cache.readthedocs.io/en/latest/user_guide.html#usage\n \"\"\"\n def hook(response, *args, **kwargs):\n if not getattr(response, 'from_cache', False):\n print(f'{response} not found in cache. Timeout for {timeout:.3f} s.')\n time.sleep(timeout)\n return response\n return hook\n\ndef get_session(rate_max=.5, timeout=None):\n \n timeout = 1 / rate_max\n\n s = requests_cache.CachedSession()\n s.hooks = {'response': make_throttle_hook(timeout)}\n return s",
"_____no_output_____"
],
[
"def process(df):\n return (df\n .pipe(set_columns, row_idx=4)\n .iloc[5:, :]\n .dropna(axis=1, how='all')\n .assign(is_data_row=lambda d: d['Destination Facility'].notnull())\n .fillna(method='ffill')\n .query('is_data_row')\n .drop(columns=['is_data_row'])\n )\n\ndef get_df(resp):\n if resp.ok:\n return pd.read_excel(io.BytesIO(resp.content))\n return pd.DataFrame()\n\n# so ducky...\ndef get_report(county_id, session=requests):\n params = {'CountyID': int(county_id), 'ReportFormat': 'XLS'}\n # if \"no record found\", the server should return 404 instead of a 200 response with an empty XLS\n resp = session.post(URL, data=params)\n try:\n df = get_df(resp).pipe(process).assign(county_id=county_id)\n except Exception as e:\n print(e)\n else:\n return df\n\ndef get_reports():\n dfs = []\n# sesh = get_session(rate_max=2)\n ids = range(1, 58)\n for county_id in ids:\n df = get_report(county_id)\n if df is not None:\n dfs.append(df)\n else:\n print(f'county_id {county_id} not processed')\n # TODO else append to missed ids?\n return pd.concat(dfs)\n\ndef process_whole(df):\n# Destination Facility\tDiposal Ton\tQuarter\tReport Year\tTotal ADC\tTransformation Ton\tcounty_id\n names = {\n 'Destination Facility': 'destination_facility',\n 'Diposal Ton': 'disposal',\n 'Report Year': 'report_year',\n 'Quarter': 'report_quarter',\n 'Total ADC': 'total_adc',\n 'Transformation Ton': 'transformation',\n }\n \n return (df\n .rename(columns=names)\n .fillna(0)\n .astype({'report_quarter': int})\n )",
"_____no_output_____"
],
[
"REPORTS = get_reports()",
"WARNING *** file size (83805) not 512 + multiple of sector size (512)\nWARNING *** file size (9050) not 512 + multiple of sector size (512)\nsingle positional indexer is out-of-bounds\ncounty_id 2 not processed\nWARNING *** file size (24704) not 512 + multiple of sector size (512)\nWARNING *** file size (49724) not 512 + multiple of sector size (512)\nWARNING *** file size (43747) not 512 + multiple of sector size (512)\nWARNING *** file size (42316) not 512 + multiple of sector size (512)\nWARNING *** file size (70569) not 512 + multiple of sector size (512)\nWARNING *** file size (25583) not 512 + multiple of sector size (512)\nWARNING *** file size (39402) not 512 + multiple of sector size (512)\nWARNING *** file size (80939) not 512 + multiple of sector size (512)\nWARNING *** file size (45953) not 512 + multiple of sector size (512)\nWARNING *** file size (22621) not 512 + multiple of sector size (512)\nWARNING *** file size (183139) not 512 + multiple of sector size (512)\nWARNING *** file size (85106) not 512 + multiple of sector size (512)\nWARNING *** file size (234968) not 512 + multiple of sector size (512)\nWARNING *** file size (78668) not 512 + multiple of sector size (512)\nWARNING *** file size (43035) not 512 + multiple of sector size (512)\nWARNING *** file size (67736) not 512 + multiple of sector size (512)\nWARNING *** file size (333252) not 512 + multiple of sector size (512)\nWARNING *** file size (47267) not 512 + multiple of sector size (512)\nWARNING *** file size (47107) not 512 + multiple of sector size (512)\nWARNING *** file size (45719) not 512 + multiple of sector size (512)\nWARNING *** file size (27150) not 512 + multiple of sector size (512)\nWARNING *** file size (62767) not 512 + multiple of sector size (512)\nWARNING *** file size (11179) not 512 + multiple of sector size (512)\nWARNING *** file size (95843) not 512 + multiple of sector size (512)\nWARNING *** file size (85818) not 512 + multiple of sector size (512)\nWARNING *** file size (45908) not 512 + multiple of sector size (512)\nWARNING *** file size (9052) not 512 + multiple of sector size (512)\nsingle positional indexer is out-of-bounds\ncounty_id 29 not processed\nWARNING *** file size (86302) not 512 + multiple of sector size (512)\nWARNING *** file size (45809) not 512 + multiple of sector size (512)\nWARNING *** file size (25917) not 512 + multiple of sector size (512)\nWARNING *** file size (148823) not 512 + multiple of sector size (512)\nWARNING *** file size (67687) not 512 + multiple of sector size (512)\nWARNING *** file size (42643) not 512 + multiple of sector size (512)\nWARNING *** file size (233739) not 512 + multiple of sector size (512)\nWARNING *** file size (134011) not 512 + multiple of sector size (512)\nWARNING *** file size (9052) not 512 + multiple of sector size (512)\nsingle positional indexer is out-of-bounds\ncounty_id 38 not processed\nWARNING *** file size (88313) not 512 + multiple of sector size (512)\nWARNING *** file size (91435) not 512 + multiple of sector size (512)\nWARNING *** file size (57656) not 512 + multiple of sector size (512)\nWARNING *** file size (112570) not 512 + multiple of sector size (512)\nWARNING *** file size (146987) not 512 + multiple of sector size (512)\nWARNING *** file size (85575) not 512 + multiple of sector size (512)\nWARNING *** file size (63895) not 512 + multiple of sector size (512)\nWARNING *** file size (42283) not 512 + multiple of sector size (512)\nWARNING *** file size (41039) not 512 + multiple of sector size 
(512)\nWARNING *** file size (61905) not 512 + multiple of sector size (512)\nWARNING *** file size (36792) not 512 + multiple of sector size (512)\nWARNING *** file size (72022) not 512 + multiple of sector size (512)\nWARNING *** file size (9052) not 512 + multiple of sector size (512)\nsingle positional indexer is out-of-bounds\ncounty_id 51 not processed\nWARNING *** file size (46547) not 512 + multiple of sector size (512)\nWARNING *** file size (20946) not 512 + multiple of sector size (512)\nWARNING *** file size (79706) not 512 + multiple of sector size (512)\nWARNING *** file size (13449) not 512 + multiple of sector size (512)\nWARNING *** file size (66029) not 512 + multiple of sector size (512)\nWARNING *** file size (60110) not 512 + multiple of sector size (512)\n"
],
[
"REPORTS = REPORTS.pipe(process_whole)\nREPORTS",
"_____no_output_____"
],
[
"REPORTS.to_csv('/data/datasets/catdd/clean/calrecycle-disposal-reporting.csv')",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c27195ce215ad54676c83f33576a13f487945b | 54,533 | ipynb | Jupyter Notebook | tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb | Beilinson/course-content | b74c630bec7002abe2f827ff8e0707f9bbb43f82 | [
"CC-BY-4.0"
] | null | null | null | tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb | Beilinson/course-content | b74c630bec7002abe2f827ff8e0707f9bbb43f82 | [
"CC-BY-4.0"
] | null | null | null | tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb | Beilinson/course-content | b74c630bec7002abe2f827ff8e0707f9bbb43f82 | [
"CC-BY-4.0"
] | null | null | null | 39.176006 | 735 | 0.574019 | [
[
[
"<a href=\"https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a> <a href=\"https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/student/W3D2_Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" alt=\"Open in Kaggle\"/></a>",
"_____no_output_____"
],
[
"# Tutorial 2: Hidden Markov Model\n**Week 3, Day 2: Hidden Dynamics**\n\n**By Neuromatch Academy**\n\n__Content creators:__ Yicheng Fei with help from Jesse Livezey and Xaq Pitkow\n\n__Content reviewers:__ John Butler, Matt Krause, Meenakshi Khosla, Spiros Chavlis, Michael Waskom\n\n__Production editor:__ Ella Batty",
"_____no_output_____"
],
[
"# Tutorial objectives\n\n*Estimated timing of tutorial: 1 hour, 5 minutes*\n\nThe world around us is often changing, but we only have noisy sensory measurements. Similarly, neural systems switch between discrete states (e.g. sleep/wake) which are observable only indirectly, through their impact on neural activity. **Hidden Markov Models** (HMM) let us reason about these unobserved (also called hidden or latent) states using a time series of measurements. \n\nHere we'll learn how changing the HMM's transition probability and measurement noise impacts the data. We'll look at how uncertainty increases as we predict the future, and how to gain information from the measurements.\n\nWe will use a binary latent variable $s_t \\in \\{0,1\\}$ that switches randomly between the two states, and a 1D Gaussian emission model $m_t|s_t \\sim \\mathcal{N}(\\mu_{s_t},\\sigma^2_{s_t})$ that provides evidence about the current state.\n\nBy the end of this tutorial, you should be able to:\n- Describe how the hidden states in a Hidden Markov model evolve over time, both in words, mathematically, and in code\n- Estimate hidden states from data using forward inference in a Hidden Markov model\n- Describe how measurement noise and state transition probabilities affect uncertainty in predictions in the future and the ability to estimate hidden states.\n\n<br>\n\n**Summary of Exercises**\n1. Generate data from an HMM.\n2. Calculate how predictions propagate in a Markov Chain without evidence.\n3. Combine new evidence and prediction from past evidence to estimate hidden states.",
"_____no_output_____"
]
],
[
[
"# @title Video 1: Introduction\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1Hh411r7JE\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"pIXxVl1A4l0\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"# Setup",
"_____no_output_____"
]
],
[
[
"# Imports\n\nimport numpy as np\nimport time\nfrom scipy import stats\nfrom scipy.optimize import linear_sum_assignment\nfrom collections import namedtuple\n\nimport matplotlib.pyplot as plt\nfrom matplotlib import patches",
"_____no_output_____"
],
[
"#@title Figure Settings\n# import ipywidgets as widgets # interactive display\nfrom IPython.html import widgets\nfrom ipywidgets import interactive, interact, HBox, Layout,VBox\nfrom IPython.display import HTML\n%config InlineBackend.figure_format = 'retina'\nplt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/NMA2020/nma.mplstyle\")",
"_____no_output_____"
],
[
"# @title Plotting Functions\n\ndef plot_hmm1(model, states, measurements, flag_m=True):\n \"\"\"Plots HMM states and measurements for 1d states and measurements.\n\n Args:\n model (hmmlearn model): hmmlearn model used to get state means.\n states (numpy array of floats): Samples of the states.\n measurements (numpy array of floats): Samples of the states.\n \"\"\"\n T = states.shape[0]\n nsteps = states.size\n aspect_ratio = 2\n fig, ax1 = plt.subplots(figsize=(8,4))\n states_forplot = list(map(lambda s: model.means[s], states))\n ax1.step(np.arange(nstep), states_forplot, \"-\", where=\"mid\", alpha=1.0, c=\"green\")\n ax1.set_xlabel(\"Time\")\n ax1.set_ylabel(\"Latent State\", c=\"green\")\n ax1.set_yticks([-1, 1])\n ax1.set_yticklabels([\"-1\", \"+1\"])\n ax1.set_xticks(np.arange(0,T,10))\n ymin = min(measurements)\n ymax = max(measurements)\n\n ax2 = ax1.twinx()\n ax2.set_ylabel(\"Measurements\", c=\"crimson\")\n\n # show measurement gaussian\n if flag_m:\n ax2.plot([T,T],ax2.get_ylim(), color=\"maroon\", alpha=0.6)\n for i in range(model.n_components):\n mu = model.means[i]\n scale = np.sqrt(model.vars[i])\n rv = stats.norm(mu, scale)\n num_points = 50\n domain = np.linspace(mu-3*scale, mu+3*scale, num_points)\n\n left = np.repeat(float(T), num_points)\n # left = np.repeat(0.0, num_points)\n offset = rv.pdf(domain)\n offset *= T / 15\n lbl = \"measurement\" if i == 0 else \"\"\n # ax2.fill_betweenx(domain, left, left-offset, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n ax2.scatter(np.arange(nstep), measurements, c=\"crimson\", s=4)\n ax2.legend(loc=\"upper left\")\n ax1.set_ylim(ax2.get_ylim())\n plt.show(fig)\n\n\ndef plot_marginal_seq(predictive_probs, switch_prob):\n \"\"\"Plots the sequence of marginal predictive distributions.\n\n Args:\n predictive_probs (list of numpy vectors): sequence of predictive probability vectors\n switch_prob (float): Probability of switching states.\n \"\"\"\n T = len(predictive_probs)\n prob_neg = [p_vec[0] for p_vec in predictive_probs]\n prob_pos = [p_vec[1] for p_vec in predictive_probs]\n fig, ax = plt.subplots()\n ax.plot(np.arange(T), prob_neg, color=\"blue\")\n ax.plot(np.arange(T), prob_pos, color=\"orange\")\n ax.legend([\n \"prob in state -1\", \"prob in state 1\"\n ])\n ax.text(T/2, 0.05, \"switching probability={}\".format(switch_prob), fontsize=12,\n bbox=dict(boxstyle=\"round\", facecolor=\"wheat\", alpha=0.6))\n ax.set_xlabel(\"Time\")\n ax.set_ylabel(\"Probability\")\n ax.set_title(\"Forgetting curve in a changing world\")\n #ax.set_aspect(aspect_ratio)\n plt.show(fig)\n\ndef plot_evidence_vs_noevidence(posterior_matrix, predictive_probs):\n \"\"\"Plots the average posterior probabilities with evidence v.s. 
no evidence\n\n Args:\n posterior_matrix: (2d numpy array of floats): The posterior probabilities in state 1 from evidence (samples, time)\n predictive_probs (numpy array of floats): Predictive probabilities in state 1 without evidence\n \"\"\"\n nsample, T = posterior_matrix.shape\n posterior_mean = posterior_matrix.mean(axis=0)\n fig, ax = plt.subplots(1)\n # ax.plot([0.0, T],[0.5, 0.5], color=\"red\", linestyle=\"dashed\")\n ax.plot([0.0, T],[0., 0.], color=\"red\", linestyle=\"dashed\")\n ax.plot(np.arange(T), predictive_probs, c=\"orange\", linewidth=2, label=\"No evidence\")\n ax.scatter(np.tile(np.arange(T), (nsample, 1)), posterior_matrix, s=0.8, c=\"green\", alpha=0.3, label=\"With evidence(Sample)\")\n ax.plot(np.arange(T), posterior_mean, c='green', linewidth=2, label=\"With evidence(Average)\")\n ax.legend()\n ax.set_yticks([0.0, 0.25, 0.5, 0.75, 1.0])\n ax.set_xlabel(\"Time\")\n ax.set_ylabel(\"Probability in State +1\")\n ax.set_title(\"Gain confidence with evidence\")\n plt.show(fig)\n\n\ndef plot_forward_inference(model, states, measurements, states_inferred,\n predictive_probs, likelihoods, posterior_probs,\n t=None,\n flag_m=True, flag_d=True, flag_pre=True, flag_like=True, flag_post=True,\n ):\n \"\"\"Plot ground truth state sequence with noisy measurements, and ground truth states v.s. inferred ones\n\n Args:\n model (instance of hmmlearn.GaussianHMM): an instance of HMM\n states (numpy vector): vector of 0 or 1(int or Bool), the sequences of true latent states\n measurements (numpy vector of numpy vector): the un-flattened Gaussian measurements at each time point, element has size (1,)\n states_inferred (numpy vector): vector of 0 or 1(int or Bool), the sequences of inferred latent states\n \"\"\"\n T = states.shape[0]\n if t is None:\n t = T-1\n nsteps = states.size\n fig, ax1 = plt.subplots(figsize=(11,6))\n # inferred states\n #ax1.step(np.arange(nstep)[:t+1], states_forplot[:t+1], \"-\", where=\"mid\", alpha=1.0, c=\"orange\", label=\"inferred\")\n # true states\n states_forplot = list(map(lambda s: model.means[s], states))\n ax1.step(np.arange(nstep)[:t+1], states_forplot[:t+1], \"-\", where=\"mid\", alpha=1.0, c=\"green\", label=\"true\")\n ax1.step(np.arange(nstep)[t+1:], states_forplot[t+1:], \"-\", where=\"mid\", alpha=0.3, c=\"green\", label=\"\")\n # Posterior curve\n delta = model.means[1] - model.means[0]\n states_interpolation = model.means[0] + delta * posterior_probs[:,1]\n if flag_post:\n ax1.step(np.arange(nstep)[:t+1], states_interpolation[:t+1], \"-\", where=\"mid\", c=\"grey\", label=\"posterior\")\n\n ax1.set_xlabel(\"Time\")\n ax1.set_ylabel(\"Latent State\", c=\"green\")\n ax1.set_yticks([-1, 1])\n ax1.set_yticklabels([\"-1\", \"+1\"])\n ax1.legend(bbox_to_anchor=(0,1.02,0.2,0.1), borderaxespad=0, ncol=2)\n\n\n\n ax2 = ax1.twinx()\n ax2.set_ylim(\n min(-1.2, np.min(measurements)),\n max(1.2, np.max(measurements))\n )\n if flag_d:\n ax2.scatter(np.arange(nstep)[:t+1], measurements[:t+1], c=\"crimson\", s=4, label=\"measurement\")\n ax2.set_ylabel(\"Measurements\", c=\"crimson\")\n\n # show measurement distributions\n if flag_m:\n for i in range(model.n_components):\n mu = model.means[i]\n scale = np.sqrt(model.vars[i])\n rv = stats.norm(mu, scale)\n num_points = 50\n domain = np.linspace(mu-3*scale, mu+3*scale, num_points)\n\n left = np.repeat(float(T), num_points)\n offset = rv.pdf(domain)\n offset *= T /15\n # lbl = \"measurement\" if i == 0 else \"\"\n lbl = \"\"\n # ax2.fill_betweenx(domain, left, left-offset, alpha=0.3, lw=2, 
color=\"maroon\", label=lbl)\n ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"maroon\", label=lbl)\n ymin, ymax = ax2.get_ylim()\n width = 0.1 * (ymax-ymin) / 2.0\n centers = [-1.0, 1.0]\n bar_scale = 15\n\n # Predictions\n data = predictive_probs\n if flag_pre:\n for i in range(model.n_components):\n domain = np.array([centers[i]-1.5*width, centers[i]-0.5*width])\n left = np.array([t,t])\n offset = np.array([data[t,i]]*2)\n offset *= bar_scale\n lbl = \"todays prior\" if i == 0 else \"\"\n ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"dodgerblue\", label=lbl)\n\n # Likelihoods\n # data = np.stack([likelihoods, 1.0-likelihoods],axis=-1)\n data = likelihoods\n data /= np.sum(data,axis=-1, keepdims=True)\n if flag_like:\n for i in range(model.n_components):\n domain = np.array([centers[i]+0.5*width, centers[i]+1.5*width])\n left = np.array([t,t])\n offset = np.array([data[t,i]]*2)\n offset *= bar_scale\n lbl = \"likelihood\" if i == 0 else \"\"\n ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"crimson\", label=lbl)\n # Posteriors\n data = posterior_probs\n if flag_post:\n for i in range(model.n_components):\n domain = np.array([centers[i]-0.5*width, centers[i]+0.5*width])\n left = np.array([t,t])\n offset = np.array([data[t,i]]*2)\n offset *= bar_scale\n lbl = \"posterior\" if i == 0 else \"\"\n ax2.fill_betweenx(domain, left+offset, left, alpha=0.3, lw=2, color=\"grey\", label=lbl)\n if t<T-1:\n ax2.plot([t,t],ax2.get_ylim(), color='black',alpha=0.6)\n if flag_pre or flag_like or flag_post:\n ax2.plot([t,t],ax2.get_ylim(), color='black',alpha=0.6)\n\n ax2.legend(bbox_to_anchor=(0.4,1.02,0.6, 0.1), borderaxespad=0, ncol=4)\n ax1.set_ylim(ax2.get_ylim())\n return fig\n # plt.show(fig)",
"_____no_output_____"
]
],
[
[
"---\n# Section 1: Binary HMM with Gaussian measurements\n\nIn contrast to last tutorial, the latent state in an HMM is not fixed, but may switch to a different state at each time step. The time dependence is simple: the probability of the state at time $t$ is wholely determined by the state at time $t-1$. This is called called the **Markov property** and the dependency of the whole state sequence $\\{s_1,...,s_t\\}$ can be described by a chain structure called a Markov Chain. You have seen a Markov chain in the [pre-reqs Statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains) and in the [Linear Systems Tutorial 2](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial2.html).\n\n\n**Markov model for binary latent dynamics**\n\nLet's reuse the binary switching process you saw in the [Linear Systems Tutorial 2](https://compneuro.neuromatch.io/tutorials/W2D2_LinearSystems/student/W2D2_Tutorial2.html): our state can be either +1 or -1. The probability of switching to state $s_t=j$ from the previous state $s_{t-1}=i$ is the conditional probability distribution $p(s_t = j| s_{t-1} = i)$. We can summarize these as a $2\\times 2$ matrix we will denote $D$ for Dynamics.\n\n\\begin{align*}\nD = \\begin{bmatrix}p(s_t = +1 | s_{t-1} = +1) & p(s_t = -1 | s_{t-1} = +1)\\\\p(s_t = +1 | s_{t-1} = -1)& p(s_t = -1 | s_{t-1} = -1)\\end{bmatrix}\n\\end{align*}\n\n$D_{ij}$ represents the transition probability to switch from state $i$ to state $j$ at next time step. Please note that this is contrast to the meaning used in the intro and in Linear Systems (their transition matrices are the transpose of ours) but syncs with the [pre-reqs Statistics day](https://compneuro.neuromatch.io/tutorials/W0D5_Statistics/student/W0D5_Tutorial2.html#section-1-2-markov-chains).\n\nWe can represent the probability of the _current_ state as a 2-dimensional vector \n\n$ P_t = [p(s_t = +1), p(s_t = -1)]$\n\n. The entries are the probability that the current state is +1 and the probability that the current state is -1 so these must sum up to 1.\n\nWe then update the probabilities over time following the Markov process:\n\n\\begin{align*}\nP_{t}= P_{t-1}D \\tag{1}\n\\end{align*}\n\nIf you know the state, the entries of $P_{t-1}$ would be either 1 or 0 as there is no uncertainty.\n\n**Measurements**\n\nIn a _Hidden_ Markov model, we cannot directly observe the latent states $s_t$. Instead we get noisy measurements $m_t\\sim p(m|s_t)$.",
"_____no_output_____"
]
],
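[
[
"# A quick numerical illustration of Equation (1), self-contained and independent of the\n# exercises below. Assumptions: a switching probability of 0.1 and a known initial state (+1).\n# Repeatedly applying P_t = P_{t-1} D moves the probabilities toward [0.5, 0.5].\nimport numpy as np\n\nD = np.array([[0.9, 0.1],\n              [0.1, 0.9]])  # entry (i, j): p(s_t = j | s_{t-1} = i)\nP = np.array([1.0, 0.0])    # certain that the current state is +1\n\nfor step in range(3):\n    P = P @ D  # Equation (1)\n    print(f'step {step + 1}: P = {P}')",
"_____no_output_____"
]
],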
[
[
"# @title Video 2: Binary HMM with Gaussian measurements\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1Sw41197Mj\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"z6KbKILMIPU\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"## Coding Exercise 1.1: Simulate a binary HMM with Gaussian measurements\n\nIn this exercise, you will implement a binary HMM with Gaussian measurements. Your HMM will start in State +1 and transition between states (both $-1 \\rightarrow 1$ and $1 \\rightarrow -1$) with probability `switch_prob`. Each state emits measurements drawn from a Gaussian with mean $+1$ for State +1 and mean $-1$ for State -1. The standard deviation of both states is given by `noise_level`.\n\nThe exercises in the next cell have three steps:\n\n**STEP 1**. In `create_HMM`, complete the transition matrix `transmat_` (i.e., $D$) in the code. \n\\begin{equation*}\nD = \n\\begin{pmatrix}\np_{\\rm stay} & p_{\\rm switch} \\\\\np_{\\rm switch} & p_{\\rm stay} \\\\\n\\end{pmatrix}\n\\end{equation*}\nwith $p_{\\rm stay} = 1 - p_{\\rm switch}$. \n\n**STEP 2**. In `create_HMM`, specify gaussian measurements $m_t | s_t$, by specifying the means for each state, and the standard deviation.\n\n**STEP 3**. In `sample`, use the transition matrix to specify the probabilities for the next state $s_t$ given the previous state $s_{t-1}$.\n\n\nIn this exercise, we will use a helper data structure named `GaussianHMM1D`, implemented in the following cell. This allows us to set the information we need about the HMM model (the starting probabilities of state, the transition matrix, the means and variances of the Gaussian distributions, and the number of components) and easily access it. For example, if we can set our model using:\n\n\n```\n model = GaussianHMM1D(\n startprob = startprob_vec,\n transmat = transmat_mat,\n means = means_vec,\n vars = vars_vec,\n n_components = n_components\n )\n```\nand then access the variances as:\n\n```\nmodel.vars\n```\n\nAlso note that we refer to the states as `0` and `1` in the code, instead of as `-1` and `+1`.",
"_____no_output_____"
]
],
[
[
"GaussianHMM1D = namedtuple('GaussianHMM1D', ['startprob', 'transmat','means','vars','n_components'])",
"_____no_output_____"
],
[
"def create_HMM(switch_prob=0.1, noise_level=1e-1, startprob=[1.0, 0.0]):\n \"\"\"Create an HMM with binary state variable and 1D Gaussian measurements\n The probability to switch to the other state is `switch_prob`. Two\n measurement models have mean 1.0 and -1.0 respectively. `noise_level`\n specifies the standard deviation of the measurement models.\n\n Args:\n switch_prob (float): probability to jump to the other state\n noise_level (float): standard deviation of measurement models. Same for\n two components\n\n Returns:\n model (GaussianHMM instance): the described HMM\n \"\"\"\n\n ############################################################################\n # Insert your code here to:\n # * Create the transition matrix, `transmat_mat` so that the odds of\n # switching is `switch_prob`\n #\t\t* Set the measurement model variances, to `noise_level ^ 2` for both\n # states\n raise NotImplementedError(\"`create_HMM` is incomplete\")\n ############################################################################\n\n n_components = 2\n\n startprob_vec = np.asarray(startprob)\n\n # STEP 1: Transition probabilities\n transmat_mat = ... # np.array([[...], [...]])\n\n # STEP 2: Measurement probabilities\n\n # Mean measurements for each state\n means_vec = ...\n\n # Noise for each state\n vars_vec = np.ones(2) * ...\n\n # Initialize model\n model = GaussianHMM1D(\n startprob = startprob_vec,\n transmat = transmat_mat,\n means = means_vec,\n vars = vars_vec,\n n_components = n_components\n )\n\n return model\n\ndef sample(model, T):\n \"\"\"Generate samples from the given HMM\n\n Args:\n model (GaussianHMM1D): the HMM with Gaussian measurement\n T (int): number of time steps to sample\n\n Returns:\n M (numpy vector): the series of measurements\n S (numpy vector): the series of latent states\n\n \"\"\"\n ############################################################################\n # Insert your code here to:\n # * take row i from `model.transmat` to get the transition probabilities\n # from state i to all states\n raise NotImplementedError(\"`sample` is incomplete\")\n ############################################################################\n # Initialize S and M\n S = np.zeros((T,),dtype=int)\n M = np.zeros((T,))\n\n # Calculate initial state\n S[0] = np.random.choice([0,1],p=model.startprob)\n\n # Latent state at time `t` depends on `t-1` and the corresponding transition probabilities to other states\n for t in range(1,T):\n\n # STEP 3: Get vector of probabilities for all possible `S[t]` given a particular `S[t-1]`\n transition_vector = ...\n\n # Calculate latent state at time `t`\n S[t] = np.random.choice([0,1],p=transition_vector)\n\n # Calculate measurements conditioned on the latent states\n # Since measurements are independent of each other given the latent states, we could calculate them as a batch\n means = model.means[S]\n scales = np.sqrt(model.vars[S])\n M = np.random.normal(loc=means, scale=scales, size=(T,))\n\n return M, S\n\n\n# Set random seed\nnp.random.seed(101)\n\n# Set parameters of HMM\nT = 100\nswitch_prob = 0.1\nnoise_level = 2.0\n\n# Create HMM\nmodel = create_HMM(switch_prob=switch_prob, noise_level=noise_level)\n\n# Sample from HMM\nM, S = sample(model,T)\nassert M.shape==(T,)\nassert S.shape==(T,)\n\n# Print values\nprint(M[:5])\nprint(S[:5])",
"_____no_output_____"
]
],
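[
[
"# A compact, self-contained sketch of what the three STEPs above compute (the linked\n# solution below is the reference implementation). Assumption: state 0 emits measurements\n# with mean +1 and state 1 with mean -1, as described in the exercise text.\nimport numpy as np\n\nswitch_prob, noise_level = 0.1, 0.1\n\n# STEP 1: stay with probability 1 - switch_prob, switch with probability switch_prob\ntransmat_mat = np.array([[1.0 - switch_prob, switch_prob],\n                         [switch_prob, 1.0 - switch_prob]])\n\n# STEP 2: measurement means for the two states, and a shared variance of noise_level ** 2\nmeans_vec = np.array([1.0, -1.0])\nvars_vec = np.ones(2) * noise_level ** 2\n\n# STEP 3: row s of the transition matrix gives p(next state | previous state s)\ns_prev = 0\ntransition_vector = transmat_mat[s_prev, :]\nprint(transition_vector)  # -> [0.9 0.1]",
"_____no_output_____"
]
],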
[
[
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_76573dcd.py)\n\n",
"_____no_output_____"
],
[
"You should see that the first five measurements are:\n \n `[-3.09355908 1.58552915 -3.93502804 -1.98819072 -1.32506947]`\n\n while the first five states are:\n\n `[0 0 0 0 0]`",
"_____no_output_____"
],
[
"## Interactive Demo 1.2: Binary HMM\n\nIn the demo below, we simulate and plot a similar HMM. You can change the probability of switching states and the noise level (the standard deviation of the Gaussian distributions for measurements). You can click the empty box to also visualize the measurements.\n\n**First**, think about and discuss these questions:\n\n1. What will the states do if the switching probability is zero? One?\n2. What will measurements look like with high noise? Low?\n\n\n\n**Then**, play with the demo to see if you were correct or not.",
"_____no_output_____"
]
],
[
[
"#@title\n\n#@markdown Execute this cell to enable the widget!\n\nnstep = 100\n\[email protected]\ndef plot_samples_widget(\n switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.02, value=0.1),\n log10_noise_level=widgets.FloatSlider(min=-1., max=1., step=.01, value=-0.3),\n flag_m=widgets.Checkbox(value=False, description='measurements', disabled=False, indent=False)\n ):\n np.random.seed(101)\n model = create_HMM(switch_prob=switch_prob,\n noise_level=10.**log10_noise_level)\n print(model)\n observations, states = sample(model, nstep)\n plot_hmm1(model, states, observations, flag_m=flag_m)",
"_____no_output_____"
]
],
[
[
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_507ce9e9.py)\n\n",
"_____no_output_____"
]
],
[
[
"# @title Video 3: Section 1 Exercises Discussion\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1dX4y1F7Fq\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"bDDRgAvQeFA\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"\n**Applications**. Measurements could be:\n* fish caught at different times as the school of fish moves from left to right\n* membrane voltage when an ion channel changes between open and closed\n* EEG frequency measurements as the brain moves between sleep states\n\nWhat phenomena can you imagine modeling with these HMMs?",
"_____no_output_____"
],
[
"----\n\n# Section 2: Predicting the future in an HMM\n\n\n*Estimated timing to here from start of tutorial: 20 min*\n\n\n\n",
"_____no_output_____"
]
],
[
[
"# @title Video 4: Forgetting in a changing world\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1o64y1s7M7\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"XOec560m61o\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"### Interactive Demo 2.1: Forgetting in a changing world\n\n\nEven if we know the world state for sure, the world changes. We become less and less certain as time goes by since our last measurement. In this exercise, we'll see how a Hidden Markov Model gradually \"forgets\" the current state when predicting the future without measurements.\n\nAssume we know that the initial state is -1, $s_0=-1$, so $p(s_0)=[1,0]$. We will plot $p(s_t)$ versus time.\n\n1. Examine helper function `simulate_prediction_only` and understand how the predicted distribution changes over time.\n\n2. Using our provided code, plot this distribution over time, and manipulate the process dynamics via the slider controlling the switching probability.\n\nDo you forget more quickly with low or high switching probability? Why? How does the curve look when `prob_switch` $>0.5$? Why?\n\n\n\n",
"_____no_output_____"
]
],
[
[
"# @markdown Execute this cell to enable helper function `simulate_prediction_only`\n\ndef simulate_prediction_only(model, nstep):\n \"\"\"\n Simulate the diffusion of HMM with no observations\n\n Args:\n model (GaussianHMM1D instance): the HMM instance\n nstep (int): total number of time steps to simulate(include initial time)\n\n Returns:\n predictive_probs (list of numpy vector): the list of marginal probabilities\n \"\"\"\n entropy_list = []\n predictive_probs = []\n prob = model.startprob\n for i in range(nstep):\n\n # Log probabilities\n predictive_probs.append(prob)\n\n # One step forward\n prob = prob @ model.transmat\n\n return predictive_probs",
"_____no_output_____"
],
[
"# @markdown Execute this cell to enable the widget!\n\nnp.random.seed(101)\nT = 100\nnoise_level = 0.5\n\[email protected](switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.1))\ndef plot(switch_prob=switch_prob):\n model = create_HMM(switch_prob=switch_prob, noise_level=noise_level)\n predictive_probs = simulate_prediction_only(model, T)\n plot_marginal_seq(predictive_probs, switch_prob)",
"_____no_output_____"
]
],
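[
[
"# A small numerical check of the forgetting curves in the demo above (plain NumPy, mirroring\n# what `simulate_prediction_only` does). For this symmetric two-state chain the prediction has\n# a closed form, p(initial state after k steps) = 0.5 + 0.5 * (1 - 2 * switch_prob)**k, so\n# higher switching probabilities forget faster; for switch_prob > 0.5 the prediction oscillates around 0.5.\nimport numpy as np\n\nfor switch_prob in [0.05, 0.2, 0.5]:\n    D = np.array([[1 - switch_prob, switch_prob],\n                  [switch_prob, 1 - switch_prob]])\n    P = np.array([1.0, 0.0])\n    for _ in range(10):\n        P = P @ D\n    closed_form = 0.5 + 0.5 * (1 - 2 * switch_prob) ** 10\n    print(f'switch_prob={switch_prob}: p(initial state) after 10 steps = {P[0]:.4f} '\n          f'(closed form {closed_form:.4f})')",
"_____no_output_____"
]
],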
[
[
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_8357dee2.py)\n\n",
"_____no_output_____"
]
],
[
[
"# @title Video 5: Section 2 Exercise Discussion\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1DM4y1K7tK\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"GRnlvxZ_ozk\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"# Section 3: Forward inference in an HMM\n\n*Estimated timing to here from start of tutorial: 35 min*",
"_____no_output_____"
]
],
[
[
"# @title Video 6: Inference in an HMM\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV17f4y1571y\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"fErhvxE9SHs\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"### Coding Exercise 3.1: Forward inference of HMM\n\nAs a recursive algorithm, let's assume we already have yesterday's posterior from time $t-1$: $p(s_{t-1}|m_{1:t-1})$. When the new data $m_{t}$ comes in, the algorithm performs the following steps:\n\n* **Predict**: transform yesterday's posterior over $s_{t-1}$ into today's prior over $s_t$ using the transition matrix $D$:\n\n$$\\text{today's prior}=p(s_t|m_{1:t-1})= p(s_{t-1}|m_{1:t-1}) D$$\n\n* **Update**: Incorporate measurement $m_t$ to calculate the posterior $p(s_t|m_{0:t})$\n\n$$\\text{posterior} \\propto \\text{prior}\\cdot \\text{likelihood}=p(m_t|s_t)p(s_t|m_{0:t-1})$$\n\nIn this exercise, you will:\n\n* STEP 1: Complete the code in function `markov_forward` to calculate the predictive marginal distribution at next time step\n\n* STEP 2: Complete the code in function `one_step_update` to combine predictive probabilities and data likelihood into a new posterior\n * Hint: We have provided a function to calculate the likelihood of $m_t$ under the two possible states: `compute_likelihood(model,M_t)`.\n\n* STEP 3: Using code we provide, plot the posterior and compare with the true values \n\nThe complete forward inference is implemented in `simulate_forward_inference` which just calls `one_step_update` recursively.\n\n\n",
"_____no_output_____"
]
],
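[
[
"# A worked one-step example of the predict/update cycle described above, with made-up numbers\n# and independent of the helper functions defined below. Assumptions: state 0 emits measurements\n# with mean +1 and state 1 with mean -1, both with standard deviation 0.5.\nimport numpy as np\nfrom scipy import stats\n\nD = np.array([[0.9, 0.1],\n              [0.1, 0.9]])            # transition matrix\nposterior_tm1 = np.array([0.8, 0.2])  # yesterday's posterior p(s_{t-1} | m_{1:t-1})\nm_t = -0.4                            # today's measurement\n\n# Predict: today's prior p(s_t | m_{1:t-1})\nprior = posterior_tm1 @ D\n\n# Update: multiply by the likelihood p(m_t | s_t) for each state, then normalize\nlikelihood = np.array([stats.norm(1.0, 0.5).pdf(m_t),\n                       stats.norm(-1.0, 0.5).pdf(m_t)])\nposterior = prior * likelihood\nposterior /= posterior.sum()\n\nprint('prior    :', prior)\nprint('posterior:', posterior)",
"_____no_output_____"
]
],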
[
[
"# @markdown Execute to enable helper functions `compute_likelihood` and `simulate_forward_inference`\n\ndef compute_likelihood(model, M):\n \"\"\"\n Calculate likelihood of seeing data `M` for all measurement models\n\n Args:\n model (GaussianHMM1D): HMM\n M (float or numpy vector)\n\n Returns:\n L (numpy vector or matrix): the likelihood\n \"\"\"\n rv0 = stats.norm(model.means[0], np.sqrt(model.vars[0]))\n rv1 = stats.norm(model.means[1], np.sqrt(model.vars[1]))\n L = np.stack([rv0.pdf(M), rv1.pdf(M)],axis=0)\n if L.size==2:\n L = L.flatten()\n return L\n\n\ndef simulate_forward_inference(model, T, data=None):\n \"\"\"\n Given HMM `model`, calculate posterior marginal predictions of x_t for T-1 time steps ahead based on\n evidence `data`. If `data` is not give, generate a sequence of measurements from first component.\n\n Args:\n model (GaussianHMM instance): the HMM\n T (int): length of returned array\n\n Returns:\n predictive_state1: predictive probabilities in first state w.r.t no evidence\n posterior_state1: posterior probabilities in first state w.r.t evidence\n \"\"\"\n\n # First re-calculate hte predictive probabilities without evidence\n # predictive_probs = simulate_prediction_only(model, T)\n predictive_probs = np.zeros((T,2))\n likelihoods = np.zeros((T,2))\n posterior_probs = np.zeros((T, 2))\n # Generate an measurement trajectory condtioned on that latent state x is always 1\n if data is not None:\n M = data\n else:\n M = np.random.normal(model.means[0], np.sqrt(model.vars[0]), (T,))\n\n # Calculate marginal for each latent state x_t\n predictive_probs[0,:] = model.startprob\n likelihoods[0,:] = compute_likelihood(model, M[[0]])\n posterior = predictive_probs[0,:] * likelihoods[0,:]\n posterior /= np.sum(posterior)\n posterior_probs[0,:] = posterior\n\n for t in range(1, T):\n prediction, likelihood, posterior = one_step_update(model, posterior_probs[t-1], M[[t]])\n # normalize and add to the list\n posterior /= np.sum(posterior)\n predictive_probs[t,:] = prediction\n likelihoods[t,:] = likelihood\n posterior_probs[t,:] = posterior\n return predictive_probs, likelihoods, posterior_probs\n\nhelp(compute_likelihood)\nhelp(simulate_forward_inference)",
"_____no_output_____"
],
[
"def markov_forward(p0, D):\n \"\"\"Calculate the forward predictive distribution in a discrete Markov chain\n\n Args:\n p0 (numpy vector): a discrete probability vector\n D (numpy matrix): the transition matrix, D[i,j] means the prob. to\n switch FROM i TO j\n\n Returns:\n p1 (numpy vector): the predictive probabilities in next time step\n \"\"\"\n ##############################################################################\n # Insert your code here to:\n # 1. Calculate the predicted probabilities at next time step using the\n # probabilities at current time and the transition matrix\n raise NotImplementedError(\"`markov_forward` is incomplete\")\n ##############################################################################\n\n # Calculate predictive probabilities (prior)\n p1 = ...\n\n return p1\n\ndef one_step_update(model, posterior_tm1, M_t):\n \"\"\"Given a HMM model, calculate the one-time-step updates to the posterior.\n Args:\n model (GaussianHMM1D instance): the HMM\n posterior_tm1 (numpy vector): Posterior at `t-1`\n M_t (numpy array): measurement at `t`\n\n Returns:\n posterior_t (numpy array): Posterior at `t`\n \"\"\"\n ##############################################################################\n # Insert your code here to:\n # 1. Call function `markov_forward` to calculate the prior for next time\n # step\n # 2. Calculate likelihood of seeing current data `M_t` under both states\n # as a vector.\n # 3. Calculate the posterior which is proportional to\n # likelihood x prediction elementwise,\n # 4. Don't forget to normalize\n raise NotImplementedError(\"`one_step_update` is incomplete\")\n ##############################################################################\n\n # Calculate predictive probabilities (prior)\n prediction = markov_forward(...)\n\n # Get the likelihood\n likelihood = compute_likelihood(...)\n\n # Calculate posterior\n posterior_t = ...\n\n # Normalize\n posterior_t /= ...\n\n return prediction, likelihood, posterior_t\n\n\n# Set random seed\nnp.random.seed(12)\n\n# Set parameters\nswitch_prob = 0.4\nnoise_level = .4\nt = 75\n\n# Create and sample from model\nmodel = create_HMM(switch_prob = switch_prob,\n noise_level = noise_level,\n startprob=[0.5, 0.5])\n\nmeasurements, states = sample(model, nstep)\n\n# Infer state sequence\npredictive_probs, likelihoods, posterior_probs = simulate_forward_inference(model, nstep,\n measurements)\nstates_inferred = np.asarray(posterior_probs[:,0] <= 0.5, dtype=int)\n\n# Visualize\nplot_forward_inference(\n model, states, measurements, states_inferred,\n predictive_probs, likelihoods, posterior_probs,t=t, flag_m = 0\n )",
"_____no_output_____"
]
],
[
[
"[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D2_HiddenDynamics/solutions/W3D2_Tutorial2_Solution_69ce2879.py)\n\n*Example output:*\n\n<img alt='Solution hint' align='left' width=1532.0 height=825.0 src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D2_HiddenDynamics/static/W3D2_Tutorial2_Solution_69ce2879_0.png>\n\n",
"_____no_output_____"
],
[
"## Interactive Demo 3.2: Forward inference in binary HMM\n\nNow visualize your inference algorithm. Play with the sliders and checkboxes to help you gain intuition. \n\n* Use the sliders `switch_prob` and `log10_noise_level` to change the switching probability and measurement noise level.\n\n* Use the slider `t` to view prediction (prior) probabilities, likelihood, and posteriors at different times.\n\nWhen does the inference make a mistake? For example, set `switch_prob=0.1`, `log_10_noise_level=-0.2`, and take a look at the probabilities at time `t=2`.",
"_____no_output_____"
]
],
[
[
"# @markdown Execute this cell to enable the demo\n\nnstep = 100\n\[email protected]\ndef plot_forward_inference_widget(\n switch_prob=widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.05),\n log10_noise_level=widgets.FloatSlider(min=-1., max=1., step=.01, value=0.1),\n t=widgets.IntSlider(min=0, max=nstep-1, step=1, value=nstep//2),\n #flag_m=widgets.Checkbox(value=True, description='measurement distribution', disabled=False, indent=False),\n flag_d=widgets.Checkbox(value=True, description='measurements', disabled=False, indent=False),\n flag_pre=widgets.Checkbox(value=True, description='todays prior', disabled=False, indent=False),\n flag_like=widgets.Checkbox(value=True, description='likelihood', disabled=False, indent=False),\n flag_post=widgets.Checkbox(value=True, description='posterior', disabled=False, indent=False),\n ):\n\n np.random.seed(102)\n\n # global model, measurements, states, states_inferred, predictive_probs, likelihoods, posterior_probs\n model = create_HMM(switch_prob=switch_prob,\n noise_level=10.**log10_noise_level,\n startprob=[0.5, 0.5])\n measurements, states = sample(model, nstep)\n\n # Infer state sequence\n predictive_probs, likelihoods, posterior_probs = simulate_forward_inference(model, nstep,\n measurements)\n states_inferred = np.asarray(posterior_probs[:,0] <= 0.5, dtype=int)\n\n fig = plot_forward_inference(\n model, states, measurements, states_inferred,\n predictive_probs, likelihoods, posterior_probs,t=t,\n flag_m=0,\n flag_d=flag_d,flag_pre=flag_pre,flag_like=flag_like,flag_post=flag_post\n )\n plt.show(fig)",
"_____no_output_____"
],
[
"# @title Video 7: Section 3 Exercise Discussion\nfrom ipywidgets import widgets\n\nout2 = widgets.Output()\nwith out2:\n from IPython.display import IFrame\n class BiliVideo(IFrame):\n def __init__(self, id, page=1, width=400, height=300, **kwargs):\n self.id=id\n src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)\n super(BiliVideo, self).__init__(src, width, height, **kwargs)\n\n video = BiliVideo(id=\"BV1EM4y1T7cB\", width=854, height=480, fs=1)\n print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))\n display(video)\n\nout1 = widgets.Output()\nwith out1:\n from IPython.display import YouTubeVideo\n video = YouTubeVideo(id=\"CNrjxNedqV0\", width=854, height=480, fs=1, rel=0)\n print('Video available at https://youtube.com/watch?v=' + video.id)\n display(video)\n\nout = widgets.Tab([out1, out2])\nout.set_title(0, 'Youtube')\nout.set_title(1, 'Bilibili')\n\ndisplay(out)",
"_____no_output_____"
]
],
[
[
"---\n# Summary\n\n*Estimated timing of tutorial: 1 hour, 5 minutes*\n\nIn this tutorial, you\n\n* Simulated the dynamics of the hidden state in a Hidden Markov model and visualized the measured data (Section 1)\n* Explored how uncertainty in a future hidden state changes based on the probabilities of switching between states (Section 2)\n* Estimated hidden states from the measurements using forward inference, connected this to Bayesian ideas, and explored the effects of noise and transition matrix probabilities on this process (Section 3)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
d0c27ac3f66d1abdd249a65771f9bc0a85a47e7c | 14,679 | ipynb | Jupyter Notebook | teacher/seaborn_pre-checkpoint.ipynb | creamcheesesteak/test_machinelearning | 9921796e853bef6a05b9a08b698aedcba436bfc9 | [
"Apache-2.0"
] | null | null | null | teacher/seaborn_pre-checkpoint.ipynb | creamcheesesteak/test_machinelearning | 9921796e853bef6a05b9a08b698aedcba436bfc9 | [
"Apache-2.0"
] | null | null | null | teacher/seaborn_pre-checkpoint.ipynb | creamcheesesteak/test_machinelearning | 9921796e853bef6a05b9a08b698aedcba436bfc9 | [
"Apache-2.0"
] | null | null | null | 32.261538 | 125 | 0.334219 | [
[
[
"import seaborn as sns",
"_____no_output_____"
],
[
"df = sns.load_dataset('titanic')",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"df.describe",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 891 entries, 0 to 890\nData columns (total 15 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 survived 891 non-null int64 \n 1 pclass 891 non-null int64 \n 2 sex 891 non-null object \n 3 age 714 non-null float64 \n 4 sibsp 891 non-null int64 \n 5 parch 891 non-null int64 \n 6 fare 891 non-null float64 \n 7 embarked 889 non-null object \n 8 class 891 non-null category\n 9 who 891 non-null object \n 10 adult_male 891 non-null bool \n 11 deck 203 non-null category\n 12 embark_town 889 non-null object \n 13 alive 891 non-null object \n 14 alone 891 non-null bool \ndtypes: bool(2), category(2), float64(2), int64(4), object(5)\nmemory usage: 80.6+ KB\n"
]
],
[
[
"### pclass, sex, sibsp, parch, embarked, class, who, adult_male, embark_town, alive, alone",
"_____no_output_____"
]
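,
[
"# A possible next step for the columns listed above: a quick categorical plot against survival\n# (a sketch; it assumes the `df` loaded from the titanic dataset earlier in this notebook).\nimport matplotlib.pyplot as plt\n\nsns.countplot(data=df, x='pclass', hue='survived')\nplt.show()",
"_____no_output_____"
]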
]
] | [
"code",
"markdown"
] | [
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c287e030f7d4f768d1fd1a82000072d7febc08 | 374,968 | ipynb | Jupyter Notebook | examples/whitebox.ipynb | MarceloClaro/whitebox-python | 7b62cabb8323c2fad2c96ebbe0c7bc061d1b5810 | [
"MIT"
] | 1 | 2021-08-09T14:07:40.000Z | 2021-08-09T14:07:40.000Z | examples/whitebox.ipynb | wobrotson/whitebox-python | 584ef79dc87ea772fbc2e0b0b9288d4393d73ca0 | [
"MIT"
] | null | null | null | examples/whitebox.ipynb | wobrotson/whitebox-python | 584ef79dc87ea772fbc2e0b0b9288d4393d73ca0 | [
"MIT"
] | 1 | 2021-01-13T19:51:00.000Z | 2021-01-13T19:51:00.000Z | 288.436923 | 230,608 | 0.915574 | [
[
[
"# A tutorial for the whitebox Python package\n\nThis notebook demonstrates the usage of the **whitebox** Python package for geospatial analysis, which is built on a stand-alone executable command-line program called [WhiteboxTools](https://github.com/jblindsay/whitebox-tools).\n\n* Authors: Dr. John Lindsay (https://jblindsay.github.io/ghrg/index.html)\n* Contributors: Dr. Qiusheng Wu (https://wetlands.io)\n* GitHub repo: https://github.com/giswqs/whitebox-python\n* WhiteboxTools: https://github.com/jblindsay/whitebox-tools\n* User Manual: https://jblindsay.github.io/wbt_book\n* PyPI: https://pypi.org/project/whitebox/\n* Documentation: https://whitebox.readthedocs.io\n* Binder: https://gishub.org/whitebox-cloud\n* Free software: [MIT license](https://opensource.org/licenses/MIT)\n\nThis tutorial can be accessed in three ways:\n\n* HTML version: https://gishub.org/whitebox-html\n* Viewable Notebook: https://gishub.org/whitebox-notebook\n* Interactive Notebook: https://gishub.org/whitebox-cloud\n\n**Launch this tutorial as an interactive Jupyter Notebook on the cloud - [MyBinder.org](https://gishub.org/whitebox-cloud).**\n\n",
"_____no_output_____"
],
[
"## Table of Content\n\n* [Installation](#Installation)\n* [About whitebox](#About-whitebox)\n* [Getting data](#Getting-data)\n* [Using whitebox](#Using-whitebox)\n* [Displaying results](#Displaying-results)\n* [whitebox GUI](#whitebox-GUI)\n* [Citing whitebox](#Citing-whitebox)\n* [Credits](#Credits)\n* [Contact](#Contact)\n",
"_____no_output_____"
],
[
"## Installation\n\n\n**whitebox** supports a variety of platforms, including Microsoft Windows, macOS, and Linux operating systems. Note that you will need to have **Python 3.x** installed. Python 2.x is not supported. The **whitebox** Python package can be installed using the following command:\n\n`pip install whitebox`\n\nIf you have installed **whitebox** Python package before and want to upgrade to the latest version, you can use the following command:\n\n`pip install whitebox -U`\n\nIf you encounter any installation issues, please check [Troubleshooting](https://github.com/giswqs/whitebox#troubleshooting) on the **whitebox** GitHub page and [Report Bugs](https://github.com/giswqs/whitebox#reporting-bugs).",
"_____no_output_____"
],
[
"## About whitebox\n\n**import whitebox and call WhiteboxTools()**",
"_____no_output_____"
]
],
[
[
"import whitebox\nwbt = whitebox.WhiteboxTools()",
"_____no_output_____"
]
],
[
[
"**Prints the whitebox-tools help...a listing of available commands**",
"_____no_output_____"
]
],
[
[
"print(wbt.help())",
"WhiteboxTools Help\n\nThe following commands are recognized:\n--cd, --wd Changes the working directory; used in conjunction with --run flag.\n-h, --help Prints help information.\n-l, --license Prints the whitebox-tools license.\n--listtools Lists all available tools. Keywords may also be used, --listtools slope.\n-r, --run Runs a tool; used in conjuction with --wd flag; -r=\"LidarInfo\".\n--toolbox Prints the toolbox associated with a tool; --toolbox=Slope.\n--toolhelp Prints the help associated with a tool; --toolhelp=\"LidarInfo\".\n--toolparameters Prints the parameters (in json form) for a specific tool; --toolparameters=\"LidarInfo\".\n-v Verbose mode. Without this flag, tool outputs will not be printed.\n--viewcode Opens the source code of a tool in a web browser; --viewcode=\"LidarInfo\".\n--version Prints the version information.\n\nExample Usage:\n>> ./whitebox-tools -r=lidar_info --cd=\"/path/to/data/\" -i=input.las --vlr --geokeys\n\n\n"
]
],
[
[
"**Prints the whitebox-tools license**",
"_____no_output_____"
]
],
[
[
"print(wbt.license())",
"WhiteboxTools License\nCopyright 2017-2019 John Lindsay\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and\nassociated documentation files (the \"Software\"), to deal in the Software without restriction,\nincluding without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial\nportions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT\nNOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\nNONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES\nOR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\n"
]
],
[
[
"**Prints the whitebox-tools version**",
"_____no_output_____"
]
],
[
[
"print(\"Version information: {}\".format(wbt.version()))",
"Version information: WhiteboxTools v1.0.0 by Dr. John B. Lindsay (c) 2017-2019\n\nWhiteboxTools is an advanced geospatial data analysis platform developed at\nthe University of Guelph's Geomorphometry and Hydrogeomatics Research \nGroup (GHRG). See https://jblindsay.github.io/ghrg/WhiteboxTools/index.html\nfor more details.\n\n"
]
],
[
[
"**Print the help for a specific tool.**",
"_____no_output_____"
]
],
[
[
"print(wbt.tool_help(\"ElevPercentile\"))",
"ElevPercentile\nDescription:\nCalculates the elevation percentile raster from a DEM.\nToolbox: Geomorphometric Analysis\nParameters:\n\nFlag Description\n----------------- -----------\n-i, --input, --dem Input raster DEM file.\n-o, --output Output raster file.\n--filterx Size of the filter kernel in the x-direction.\n--filtery Size of the filter kernel in the y-direction.\n--sig_digits Number of significant digits.\n\n\nExample usage:\n>>./whitebox_tools -r=ElevPercentile -v --wd=\"/path/to/data/\" --dem=DEM.tif -o=output.tif --filter=25\n\n\n"
]
],
[
[
"**Tool names in the whitebox Python package can be called either using the snake_case or CamelCase convention (e.g. lidar_info or LidarInfo). The example below uses snake_case.** ",
"_____no_output_____"
]
],
[
[
"import os, pkg_resources\n\n# identify the sample data directory of the package\ndata_dir = os.path.dirname(pkg_resources.resource_filename(\"whitebox\", 'testdata/'))\n\n# set whitebox working directory\nwbt.set_working_dir(data_dir)\nwbt.verbose = False\n\n# call whiteboxtools\nwbt.feature_preserving_smoothing(\"DEM.tif\", \"smoothed.tif\", filter=9)\nwbt.breach_depressions(\"smoothed.tif\", \"breached.tif\")\nwbt.d_inf_flow_accumulation(\"breached.tif\", \"flow_accum.tif\")",
"_____no_output_____"
]
],
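[
[
"# A short follow-up sketch. Assumptions: the `hillshade` convenience method is available in this\n# version of the wrapper, and the output file names are hypothetical. It derives a hillshade from\n# the breached DEM produced above and checks that the outputs were written to the working directory.\nimport os\n\nwbt.hillshade(\"breached.tif\", \"hillshade.tif\", azimuth=315.0, altitude=30.0)\n\nfor name in [\"smoothed.tif\", \"breached.tif\", \"flow_accum.tif\", \"hillshade.tif\"]:\n    print(name, \"->\", os.path.exists(os.path.join(data_dir, name)))",
"_____no_output_____"
]
],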
[
[
"**You can search tools using keywords. For example, the script below searches and lists tools with 'lidar' or 'LAS' in tool name or description.**",
"_____no_output_____"
]
],
[
[
"lidar_tools = wbt.list_tools(['lidar', 'LAS'])\nfor index, tool in enumerate(lidar_tools):\n print(\"{} {}: {} ...\".format(str(index+1).zfill(3), tool, lidar_tools[tool][:45]))",
"001 classify_overlap_points: Classifies or filters LAS points in regions o ...\n002 clip_lidar_to_polygon: Clips a LiDAR point cloud to a vector polygon ...\n003 erase_polygon_from_lidar: Erases (cuts out) a vector polygon or polygon ...\n004 filter_lidar_classes: Removes points in a LAS file with certain spe ...\n005 filter_lidar_scan_angles: Removes points in a LAS file with scan angles ...\n006 find_flightline_edge_points: Identifies points along a flightline's edge i ...\n007 find_patch_or_class_edge_cells: Finds all cells located on the edge of patch ...\n008 flightline_overlap: Reads a LiDAR (LAS) point file and outputs a ...\n009 las_to_ascii: Converts one or more LAS files into ASCII tex ...\n010 las_to_multipoint_shapefile: Converts one or more LAS files into Multipoin ...\n011 las_to_shapefile: Converts one or more LAS files into a vector ...\n012 lidar_block_maximum: Creates a block-maximum raster from an input ...\n013 lidar_block_minimum: Creates a block-minimum raster from an input ...\n014 lidar_classify_subset: Classifies the values in one LiDAR point clou ...\n015 lidar_colourize: Adds the red-green-blue colour fields of a Li ...\n016 lidar_construct_vector_tin: Creates a vector triangular irregular network ...\n017 lidar_elevation_slice: Outputs all of the points within a LiDAR (LAS ...\n018 lidar_ground_point_filter: Identifies ground points within LiDAR dataset ...\n019 lidar_hex_binning: Hex-bins a set of LiDAR points. ...\n020 lidar_hillshade: Calculates a hillshade value for points withi ...\n021 lidar_histogram: Creates a histogram of LiDAR data. ...\n022 lidar_idw_interpolation: Interpolates LAS files using an inverse-dista ...\n023 lidar_info: Prints information about a LiDAR (LAS) datase ...\n024 lidar_join: Joins multiple LiDAR (LAS) files into a singl ...\n025 lidar_kappa_index: Performs a kappa index of agreement (KIA) ana ...\n026 lidar_nearest_neighbour_gridding: Grids LAS files using nearest-neighbour schem ...\n027 lidar_point_density: Calculates the spatial pattern of point densi ...\n028 lidar_point_stats: Creates several rasters summarizing the distr ...\n029 lidar_ransac_planes: Removes outliers (high and low points) in a L ...\n030 lidar_remove_duplicates: Removes duplicate points from a LiDAR data se ...\n031 lidar_remove_outliers: Removes outliers (high and low points) in a L ...\n032 lidar_segmentation: Segments a LiDAR point cloud based on normal ...\n033 lidar_segmentation_based_filter: Identifies ground points within LiDAR point c ...\n034 lidar_tin_gridding: Creates a raster grid based on a Delaunay tri ...\n035 lidar_thin: Thins a LiDAR point cloud, reducing point den ...\n036 lidar_thin_high_density: Thins points from high density areas within a ...\n037 lidar_tile: Tiles a LiDAR LAS file into multiple LAS file ...\n038 lidar_tile_footprint: Creates a vector polygon of the convex hull o ...\n039 lidar_tophat_transform: Performs a white top-hat transform on a Lidar ...\n040 normal_vectors: Calculates normal vectors for points within a ...\n041 pennock_landform_class: Classifies hillslope zones based on slope, pr ...\n042 raster_area: Calculates the area of polygons or classes wi ...\n043 raster_cell_assignment: Assign row or column number to cells. ...\n044 reclass: Reclassifies the values in a raster image. 
...\n045 reclass_equal_interval: Reclassifies the values in a raster image bas ...\n046 reclass_from_file: Reclassifies the values in a raster image usi ...\n047 select_tiles_by_polygon: Copies LiDAR tiles overlapping with a polygon ...\n048 shape_complexity_index_raster: Calculates the complexity of raster polygons ...\n049 stream_link_class: Identifies the exterior/interior links and no ...\n"
]
],
[
[
"**List all available tools in whitebox-tools**. Currently, **whitebox** contains 372 tools. More tools will be added as they become available.",
"_____no_output_____"
]
],
[
[
"all_tools = wbt.list_tools()\nfor index, tool in enumerate(all_tools):\n print(\"{} {}: {} ...\".format(str(index+1).zfill(3), tool, all_tools[tool][:45]))",
"001 absolute_value: Calculates the absolute value of every cell i ...\n002 adaptive_filter: Performs an adaptive filter on an image. ...\n003 add: Performs an addition operation on two rasters ...\n004 add_point_coordinates_to_table: Modifies the attribute table of a point vecto ...\n005 aggregate_raster: Aggregates a raster to a lower resolution. ...\n006 and: Performs a logical AND operator on two Boolea ...\n007 anova: Performs an analysis of variance (ANOVA) test ...\n008 arc_cos: Returns the inverse cosine (arccos) of each v ...\n009 arc_sin: Returns the inverse sine (arcsin) of each val ...\n010 arc_tan: Returns the inverse tangent (arctan) of each ...\n011 aspect: Calculates an aspect raster from an input DEM ...\n012 atan2: Returns the 2-argument inverse tangent (atan2 ...\n013 attribute_correlation: Performs a correlation analysis on attribute ...\n014 attribute_histogram: Creates a histogram for the field values of a ...\n015 attribute_scattergram: Creates a scattergram for two field values of ...\n016 average_flowpath_slope: Measures the average slope gradient from each ...\n017 average_normal_vector_angular_deviation: Calculates the circular variance of aspect at ...\n018 average_overlay: Calculates the average for each grid cell fro ...\n019 average_upslope_flowpath_length: Measures the average length of all upslope fl ...\n020 balance_contrast_enhancement: Performs a balance contrast enhancement on a ...\n021 basins: Identifies drainage basins that drain to the ...\n022 bilateral_filter: A bilateral filter is an edge-preserving smoo ...\n023 block_maximum_gridding: Creates a raster grid based on a set of vecto ...\n024 block_minimum_gridding: Creates a raster grid based on a set of vecto ...\n025 boundary_shape_complexity: Calculates the complexity of the boundaries o ...\n026 breach_depressions: Breaches all of the depressions in a DEM usin ...\n027 breach_single_cell_pits: Removes single-cell pits from an input DEM by ...\n028 buffer_raster: Maps a distance-based buffer around each non- ...\n029 ceil: Returns the smallest (closest to negative inf ...\n030 centroid: Calculates the centroid, or average location, ...\n031 centroid_vector: Identifes the centroid point of a vector poly ...\n032 change_vector_analysis: Performs a change vector analysis on a two-da ...\n033 circular_variance_of_aspect: Calculates the circular variance of aspect at ...\n034 classify_overlap_points: Classifies or filters LAS points in regions o ...\n035 clean_vector: Removes null features and lines/polygons with ...\n036 clip: Extract all the features, or parts of feature ...\n037 clip_lidar_to_polygon: Clips a LiDAR point cloud to a vector polygon ...\n038 clip_raster_to_polygon: Clips a raster to a vector polygon. ...\n039 closing: A closing is a mathematical morphology operat ...\n040 clump: Groups cells that form discrete areas, assign ...\n041 compactness_ratio: Calculates the compactness ratio (A/P), a mea ...\n042 conservative_smoothing_filter: Performs a conservative-smoothing filter on a ...\n043 construct_vector_tin: Creates a vector triangular irregular network ...\n044 convert_nodata_to_zero: Converts nodata values in a raster to zero. 
...\n045 convert_raster_format: Converts raster data from one format to anoth ...\n046 corner_detection: Identifies corner patterns in boolean images ...\n047 correct_vignetting: Corrects the darkening of images towards corn ...\n048 cos: Returns the cosine (cos) of each values in a ...\n049 cosh: Returns the hyperbolic cosine (cosh) of each ...\n050 cost_allocation: Identifies the source cell to which each grid ...\n051 cost_distance: Performs cost-distance accumulation on a cost ...\n052 cost_pathway: Performs cost-distance pathway analysis using ...\n053 count_if: Counts the number of occurrences of a specifi ...\n054 create_colour_composite: Creates a colour-composite image from three b ...\n055 create_hexagonal_vector_grid: Creates a hexagonal vector grid. ...\n056 create_plane: Creates a raster image based on the equation ...\n057 create_rectangular_vector_grid: Creates a rectangular vector grid. ...\n058 crispness_index: Calculates the Crispness Index, which is used ...\n059 cross_tabulation: Performs a cross-tabulation on two categorica ...\n060 csv_points_to_vector: Converts a CSV text file to vector points. ...\n061 cumulative_distribution: Converts a raster image to its cumulative dis ...\n062 d8_flow_accumulation: Calculates a D8 flow accumulation raster from ...\n063 d8_mass_flux: Performs a D8 mass flux calculation. ...\n064 d8_pointer: Calculates a D8 flow pointer raster from an i ...\n065 d_inf_flow_accumulation: Calculates a D-infinity flow accumulation ras ...\n066 d_inf_mass_flux: Performs a D-infinity mass flux calculation. ...\n067 d_inf_pointer: Calculates a D-infinity flow pointer (flow di ...\n068 decrement: Decreases the values of each grid cell in an ...\n069 depth_in_sink: Measures the depth of sinks (depressions) in ...\n070 dev_from_mean_elev: Calculates deviation from mean elevation. ...\n071 diff_from_mean_elev: Calculates difference from mean elevation (eq ...\n072 diff_of_gaussian_filter: Performs a Difference of Gaussian (DoG) filte ...\n073 difference: Outputs the features that occur in one of the ...\n074 direct_decorrelation_stretch: Performs a direct decorrelation stretch enhan ...\n075 directional_relief: Calculates relief for cells in an input DEM f ...\n076 dissolve: Removes the interior, or shared, boundaries w ...\n077 distance_to_outlet: Calculates the distance of stream grid cells ...\n078 diversity_filter: Assigns each cell in the output grid the numb ...\n079 divide: Performs a division operation on two rasters ...\n080 downslope_distance_to_stream: Measures distance to the nearest downslope st ...\n081 downslope_flowpath_length: Calculates the downslope flowpath length from ...\n082 downslope_index: Calculates the Hjerdt et al. 
(2004) downslope ...\n083 edge_density: Calculates the density of edges, or breaks-in ...\n084 edge_preserving_mean_filter: Performs a simple edge-preserving mean filter ...\n085 edge_proportion: Calculate the proportion of cells in a raster ...\n086 elev_above_pit: Calculate the elevation of each grid cell abo ...\n087 elev_percentile: Calculates the elevation percentile raster fr ...\n088 elev_relative_to_min_max: Calculates the elevation of a location relati ...\n089 elev_relative_to_watershed_min_max: Calculates the elevation of a location relati ...\n090 elevation_above_stream: Calculates the elevation of cells above the n ...\n091 elevation_above_stream_euclidean: Calculates the elevation of cells above the n ...\n092 eliminate_coincident_points: Removes any coincident, or nearly coincident, ...\n093 elongation_ratio: Calculates the elongation ratio for vector po ...\n094 emboss_filter: Performs an emboss filter on an image, simila ...\n095 equal_to: Performs a equal-to comparison operation on t ...\n096 erase: Removes all the features, or parts of feature ...\n097 erase_polygon_from_lidar: Erases (cuts out) a vector polygon or polygon ...\n098 erase_polygon_from_raster: Erases (cuts out) a vector polygon from a ras ...\n099 euclidean_allocation: Assigns grid cells in the output raster the v ...\n100 euclidean_distance: Calculates the Shih and Wu (2004) Euclidean d ...\n101 exp: Returns the exponential (base e) of values in ...\n102 exp2: Returns the exponential (base 2) of values in ...\n103 export_table_to_csv: Exports an attribute table to a CSV text file ...\n104 extend_vector_lines: Extends vector lines by a specified distance. ...\n105 extract_nodes: Converts vector lines or polygons into vertex ...\n106 extract_raster_values_at_points: Extracts the values of raster(s) at vector po ...\n107 extract_streams: Extracts stream grid cells from a flow accumu ...\n108 extract_valleys: Identifies potential valley bottom grid cells ...\n109 fd8_flow_accumulation: Calculates an FD8 flow accumulation raster fr ...\n110 fd8_pointer: Calculates an FD8 flow pointer raster from an ...\n111 farthest_channel_head: Calculates the distance to the furthest upstr ...\n112 fast_almost_gaussian_filter: Performs a fast approximate Gaussian filter o ...\n113 feature_preserving_smoothing: Reduces short-scale variation in an input DEM ...\n114 fetch_analysis: Performs an analysis of fetch or upwind dista ...\n115 fill_burn: Burns streams into a DEM using the FillBurn ( ...\n116 fill_depressions: Fills all of the depressions in a DEM. Depres ...\n117 fill_missing_data: Fills NoData holes in a DEM. ...\n118 fill_single_cell_pits: Raises pit cells to the elevation of their lo ...\n119 filter_lidar_classes: Removes points in a LAS file with certain spe ...\n120 filter_lidar_scan_angles: Removes points in a LAS file with scan angles ...\n121 find_flightline_edge_points: Identifies points along a flightline's edge i ...\n122 find_lowest_or_highest_points: Locates the lowest and/or highest valued cell ...\n123 find_main_stem: Finds the main stem, based on stream lengths, ...\n124 find_no_flow_cells: Finds grid cells with no downslope neighbours ...\n125 find_parallel_flow: Finds areas of parallel flow in D8 flow direc ...\n126 find_patch_or_class_edge_cells: Finds all cells located on the edge of patch ...\n127 find_ridges: Identifies potential ridge and peak grid cell ...\n128 flatten_lakes: Flattens lake polygons in a raster DEM. 
...\n129 flightline_overlap: Reads a LiDAR (LAS) point file and outputs a ...\n130 flip_image: Reflects an image in the vertical or horizont ...\n131 flood_order: Assigns each DEM grid cell its order in the s ...\n132 floor: Returns the largest (closest to positive infi ...\n133 flow_accumulation_full_workflow: Resolves all of the depressions in a DEM, out ...\n134 flow_length_diff: Calculates the local maximum absolute differe ...\n135 gamma_correction: Performs a gamma correction on an input image ...\n136 gaussian_contrast_stretch: Performs a Gaussian contrast stretch on input ...\n137 gaussian_filter: Performs a Gaussian filter on an image. ...\n138 greater_than: Performs a greater-than comparison operation ...\n139 hack_stream_order: Assigns the Hack stream order to each tributa ...\n140 high_pass_filter: Performs a high-pass filter on an input image ...\n141 high_pass_median_filter: Performs a high pass median filter on an inpu ...\n142 highest_position: Identifies the stack position of the maximum ...\n143 hillshade: Calculates a hillshade raster from an input D ...\n144 hillslopes: Identifies the individual hillslopes draining ...\n145 histogram_equalization: Performs a histogram equalization contrast en ...\n146 histogram_matching: Alters the statistical distribution of a rast ...\n147 histogram_matching_two_images: This tool alters the cumulative distribution ...\n148 hole_proportion: Calculates the proportion of the total area o ...\n149 horizon_angle: Calculates horizon angle (maximum upwind slop ...\n150 horton_stream_order: Assigns the Horton stream order to each tribu ...\n151 hypsometric_analysis: Calculates a hypsometric curve for one or mor ...\n152 idw_interpolation: Interpolates vector points into a raster surf ...\n153 ihs_to_rgb: Converts intensity, hue, and saturation (IHS) ...\n154 image_autocorrelation: Performs Moran's I analysis on two or more in ...\n155 image_correlation: Performs image correlation on two or more inp ...\n156 image_regression: Performs image regression analysis on two inp ...\n157 image_stack_profile: Plots an image stack profile (i.e. signature) ...\n158 impoundment_size_index: Calculates the impoundment size resulting fro ...\n159 in_place_add: Performs an in-place addition operation (inpu ...\n160 in_place_divide: Performs an in-place division operation (inpu ...\n161 in_place_multiply: Performs an in-place multiplication operation ...\n162 in_place_subtract: Performs an in-place subtraction operation (i ...\n163 increment: Increases the values of each grid cell in an ...\n164 integer_division: Performs an integer division operation on two ...\n165 integral_image: Transforms an input image (summed area table) ...\n166 intersect: Identifies the parts of features in common be ...\n167 is_no_data: Identifies NoData valued pixels in an image. ...\n168 isobasins: Divides a landscape into nearly equal sized d ...\n169 jenson_snap_pour_points: Moves outlet points used to specify points of ...\n170 join_tables: Merge a vector's attribute table with another ...\n171 k_means_clustering: Performs a k-means clustering operation on a ...\n172 k_nearest_mean_filter: A k-nearest mean filter is a type of edge-pre ...\n173 ks_test_for_normality: Evaluates whether the values in a raster are ...\n174 kappa_index: Performs a kappa index of agreement (KIA) ana ...\n175 laplacian_filter: Performs a Laplacian filter on an image. 
...\n176 laplacian_of_gaussian_filter: Performs a Laplacian-of-Gaussian (LoG) filter ...\n177 las_to_ascii: Converts one or more LAS files into ASCII tex ...\n178 las_to_multipoint_shapefile: Converts one or more LAS files into Multipoin ...\n179 las_to_shapefile: Converts one or more LAS files into a vector ...\n180 layer_footprint: Creates a vector polygon footprint of the are ...\n181 lee_filter: Performs a Lee (Sigma) smoothing filter on an ...\n182 length_of_upstream_channels: Calculates the total length of channels upstr ...\n183 less_than: Performs a less-than comparison operation on ...\n184 lidar_block_maximum: Creates a block-maximum raster from an input ...\n185 lidar_block_minimum: Creates a block-minimum raster from an input ...\n186 lidar_classify_subset: Classifies the values in one LiDAR point clou ...\n187 lidar_colourize: Adds the red-green-blue colour fields of a Li ...\n188 lidar_construct_vector_tin: Creates a vector triangular irregular network ...\n189 lidar_elevation_slice: Outputs all of the points within a LiDAR (LAS ...\n190 lidar_ground_point_filter: Identifies ground points within LiDAR dataset ...\n191 lidar_hex_binning: Hex-bins a set of LiDAR points. ...\n192 lidar_hillshade: Calculates a hillshade value for points withi ...\n193 lidar_histogram: Creates a histogram of LiDAR data. ...\n194 lidar_idw_interpolation: Interpolates LAS files using an inverse-dista ...\n195 lidar_info: Prints information about a LiDAR (LAS) datase ...\n196 lidar_join: Joins multiple LiDAR (LAS) files into a singl ...\n197 lidar_kappa_index: Performs a kappa index of agreement (KIA) ana ...\n198 lidar_nearest_neighbour_gridding: Grids LAS files using nearest-neighbour schem ...\n199 lidar_point_density: Calculates the spatial pattern of point densi ...\n200 lidar_point_stats: Creates several rasters summarizing the distr ...\n201 lidar_ransac_planes: Removes outliers (high and low points) in a L ...\n202 lidar_remove_duplicates: Removes duplicate points from a LiDAR data se ...\n203 lidar_remove_outliers: Removes outliers (high and low points) in a L ...\n204 lidar_segmentation: Segments a LiDAR point cloud based on normal ...\n205 lidar_segmentation_based_filter: Identifies ground points within LiDAR point c ...\n206 lidar_tin_gridding: Creates a raster grid based on a Delaunay tri ...\n207 lidar_thin: Thins a LiDAR point cloud, reducing point den ...\n208 lidar_thin_high_density: Thins points from high density areas within a ...\n209 lidar_tile: Tiles a LiDAR LAS file into multiple LAS file ...\n210 lidar_tile_footprint: Creates a vector polygon of the convex hull o ...\n211 lidar_tophat_transform: Performs a white top-hat transform on a Lidar ...\n212 line_detection_filter: Performs a line-detection filter on an image. ...\n213 line_intersections: Identifies points where the features of two v ...\n214 line_thinning: Performs line thinning a on Boolean raster im ...\n215 linearity_index: Calculates the linearity index for vector pol ...\n216 lines_to_polygons: Converts vector polylines to polygons. 
...\n217 list_unique_values: Lists the unique values contained in a field ...\n218 ln: Returns the natural logarithm of values in a ...\n219 log10: Returns the base-10 logarithm of values in a ...\n220 log2: Returns the base-2 logarithm of values in a r ...\n221 long_profile: Plots the stream longitudinal profiles for on ...\n222 long_profile_from_points: Plots the longitudinal profiles from flow-pat ...\n223 longest_flowpath: Delineates the longest flowpaths for a group ...\n224 lowest_position: Identifies the stack position of the minimum ...\n225 majority_filter: Assigns each cell in the output grid the most ...\n226 max: Performs a MAX operation on two rasters or a ...\n227 max_absolute_overlay: Evaluates the maximum absolute value for each ...\n228 max_anisotropy_dev: Calculates the maximum anisotropy (directiona ...\n229 max_anisotropy_dev_signature: Calculates the anisotropy in deviation from m ...\n230 max_branch_length: Lindsay and Seibert's (2013) branch length in ...\n231 max_difference_from_mean: Calculates the maximum difference from mean e ...\n232 max_downslope_elev_change: Calculates the maximum downslope change in el ...\n233 max_elev_dev_signature: Calculates the maximum elevation deviation ov ...\n234 max_elevation_deviation: Calculates the maximum elevation deviation ov ...\n235 max_overlay: Evaluates the maximum value for each grid cel ...\n236 max_upslope_flowpath_length: Measures the maximum length of all upslope fl ...\n237 maximum_filter: Assigns each cell in the output grid the maxi ...\n238 mean_filter: Performs a mean filter (low-pass filter) on a ...\n239 median_filter: Performs a median filter on an input image. ...\n240 medoid: Calculates the medoid for a series of vector ...\n241 merge_line_segments: Merges vector line segments into larger featu ...\n242 merge_table_with_csv: Merge a vector's attribute table with a table ...\n243 merge_vectors: Combines two or more input vectors of the sam ...\n244 min: Performs a MIN operation on two rasters or a ...\n245 min_absolute_overlay: Evaluates the minimum absolute value for each ...\n246 min_downslope_elev_change: Calculates the minimum downslope change in el ...\n247 min_max_contrast_stretch: Performs a min-max contrast stretch on an inp ...\n248 min_overlay: Evaluates the minimum value for each grid cel ...\n249 minimum_bounding_box: Creates a vector minimum bounding rectangle a ...\n250 minimum_bounding_circle: Delineates the minimum bounding circle (i.e. ...\n251 minimum_bounding_envelope: Creates a vector axis-aligned minimum boundin ...\n252 minimum_convex_hull: Creates a vector convex polygon around vector ...\n253 minimum_filter: Assigns each cell in the output grid the mini ...\n254 modified_k_means_clustering: Performs a modified k-means clustering operat ...\n255 modify_no_data_value: Converts nodata values in a raster to zero. ...\n256 modulo: Performs a modulo operation on two rasters or ...\n257 mosaic: Mosaics two or more images together. 
...\n258 mosaic_with_feathering: Mosaics two images together using a featherin ...\n259 multi_part_to_single_part: Converts a vector file containing multi-part ...\n260 multiply: Performs a multiplication operation on two ra ...\n261 multiscale_roughness: Calculates surface roughness over a range of ...\n262 multiscale_roughness_signature: Calculates the surface roughness for points o ...\n263 multiscale_std_dev_normals: Calculates surface roughness over a range of ...\n264 multiscale_std_dev_normals_signature: Calculates the surface roughness for points o ...\n265 multiscale_topographic_position_image: Creates a multiscale topographic position ima ...\n266 narrowness_index: Calculates the narrowness of raster polygons. ...\n267 nearest_neighbour_gridding: Creates a raster grid based on a set of vecto ...\n268 negate: Changes the sign of values in a raster or the ...\n269 new_raster_from_base: Creates a new raster using a base image. ...\n270 normal_vectors: Calculates normal vectors for points within a ...\n271 normalized_difference_index: Calculate a normalized-difference index (NDI) ...\n272 not: Performs a logical NOT operator on two Boolea ...\n273 not_equal_to: Performs a not-equal-to comparison operation ...\n274 num_downslope_neighbours: Calculates the number of downslope neighbours ...\n275 num_inflowing_neighbours: Computes the number of inflowing neighbours t ...\n276 num_upslope_neighbours: Calculates the number of upslope neighbours t ...\n277 olympic_filter: Performs an olympic smoothing filter on an im ...\n278 opening: An opening is a mathematical morphology opera ...\n279 or: Performs a logical OR operator on two Boolean ...\n280 panchromatic_sharpening: Increases the spatial resolution of image dat ...\n281 patch_orientation: Calculates the orientation of vector polygons ...\n282 pennock_landform_class: Classifies hillslope zones based on slope, pr ...\n283 percent_elev_range: Calculates percent of elevation range from a ...\n284 percent_equal_to: Calculates the percentage of a raster stack t ...\n285 percent_greater_than: Calculates the percentage of a raster stack t ...\n286 percent_less_than: Calculates the percentage of a raster stack t ...\n287 percentage_contrast_stretch: Performs a percentage linear contrast stretch ...\n288 percentile_filter: Performs a percentile filter on an input imag ...\n289 perimeter_area_ratio: Calculates the perimeter-area ratio of vector ...\n290 pick_from_list: Outputs the value from a raster stack specifi ...\n291 plan_curvature: Calculates a plan (contour) curvature raster ...\n292 polygon_area: Calculates the area of vector polygons. ...\n293 polygon_long_axis: This tool can be used to map the long axis of ...\n294 polygon_perimeter: Calculates the perimeter of vector polygons. ...\n295 polygon_short_axis: This tool can be used to map the short axis o ...\n296 polygonize: Creates a polygon layer from two or more inte ...\n297 polygons_to_lines: Converts vector polygons to polylines. ...\n298 power: Raises the values in grid cells of one raster ...\n299 prewitt_filter: Performs a Prewitt edge-detection filter on a ...\n300 principal_component_analysis: Performs a principal component analysis (PCA) ...\n301 print_geo_tiff_tags: Prints the tags within a GeoTIFF. ...\n302 profile: Plots profiles from digital surface models. ...\n303 profile_curvature: Calculates a profile curvature raster from an ...\n304 quantiles: Transforms raster values into quantiles. 
...\n305 radius_of_gyration: Calculates the distance of cells from their p ...\n306 raise_walls: Raises walls in a DEM along a line or around ...\n307 random_field: Creates an image containing random values. ...\n308 random_sample: Creates an image containing randomly located ...\n309 range_filter: Assigns each cell in the output grid the rang ...\n310 raster_area: Calculates the area of polygons or classes wi ...\n311 raster_cell_assignment: Assign row or column number to cells. ...\n312 raster_histogram: Creates a histogram from raster values. ...\n313 raster_streams_to_vector: Converts a raster stream file into a vector f ...\n314 raster_summary_stats: Measures a rasters min, max, average, standar ...\n315 raster_to_vector_lines: Converts a raster lines features into a vecto ...\n316 raster_to_vector_points: Converts a raster dataset to a vector of the ...\n317 rasterize_streams: Rasterizes vector streams based on Lindsay (2 ...\n318 reciprocal: Returns the reciprocal (i.e. 1 / z) of values ...\n319 reclass: Reclassifies the values in a raster image. ...\n320 reclass_equal_interval: Reclassifies the values in a raster image bas ...\n321 reclass_from_file: Reclassifies the values in a raster image usi ...\n322 reinitialize_attribute_table: Reinitializes a vector's attribute table dele ...\n323 related_circumscribing_circle: Calculates the related circumscribing circle ...\n324 relative_aspect: Calculates relative aspect (relative to a use ...\n325 relative_stream_power_index: Calculates the relative stream power index. ...\n326 relative_topographic_position: Calculates the relative topographic position ...\n327 remove_off_terrain_objects: Removes off-terrain objects from a raster dig ...\n328 remove_polygon_holes: Removes holes within the features of a vector ...\n329 remove_short_streams: Removes short first-order streams from a stre ...\n330 remove_spurs: Removes the spurs (pruning operation) from a ...\n331 resample: Resamples one or more input images into a des ...\n332 rescale_value_range: Performs a min-max contrast stretch on an inp ...\n333 rgb_to_ihs: Converts red, green, and blue (RGB) images in ...\n334 rho8_pointer: Calculates a stochastic Rho8 flow pointer ras ...\n335 roberts_cross_filter: Performs a Robert's cross edge-detection filt ...\n336 root_mean_square_error: Calculates the RMSE and other accuracy statis ...\n337 round: Rounds the values in an input raster to the n ...\n338 ruggedness_index: Calculates the Riley et al.'s (1999) terrain ...\n339 scharr_filter: Performs a Scharr edge-detection filter on an ...\n340 sediment_transport_index: Calculates the sediment transport index. ...\n341 select_tiles_by_polygon: Copies LiDAR tiles overlapping with a polygon ...\n342 set_nodata_value: Assign a specified value in an input image to ...\n343 shape_complexity_index: Calculates overall polygon shape complexity o ...\n344 shape_complexity_index_raster: Calculates the complexity of raster polygons ...\n345 shreve_stream_magnitude: Assigns the Shreve stream magnitude to each l ...\n346 sigmoidal_contrast_stretch: Performs a sigmoidal contrast stretch on inpu ...\n347 sin: Returns the sine (sin) of each values in a ra ...\n348 single_part_to_multi_part: Converts a vector file containing multi-part ...\n349 sinh: Returns the hyperbolic sine (sinh) of each va ...\n350 sink: Identifies the depressions in a DEM, giving e ...\n351 slope: Calculates a slope raster from an input DEM. ...\n352 slope_vs_elevation_plot: Creates a slope vs. 
elevation plot for one or ...\n353 smooth_vectors: Smooths a vector coverage of either a POLYLIN ...\n354 snap_pour_points: Moves outlet points used to specify points of ...\n355 sobel_filter: Performs a Sobel edge-detection filter on an ...\n356 spherical_std_dev_of_normals: Calculates the spherical standard deviation o ...\n357 split_colour_composite: This tool splits an RGB colour composite imag ...\n358 split_with_lines: Splits the lines or polygons in one layer usi ...\n359 square: Squares the values in a raster. ...\n360 square_root: Returns the square root of the values in a ra ...\n361 standard_deviation_contrast_stretch: Performs a standard-deviation contrast stretc ...\n362 standard_deviation_filter: Assigns each cell in the output grid the stan ...\n363 standard_deviation_of_slope: Calculates the standard deviation of slope fr ...\n364 stochastic_depression_analysis: Preforms a stochastic analysis of depressions ...\n365 strahler_order_basins: Identifies Strahler-order basins from an inpu ...\n366 strahler_stream_order: Assigns the Strahler stream order to each lin ...\n367 stream_link_class: Identifies the exterior/interior links and no ...\n368 stream_link_identifier: Assigns a unique identifier to each link in a ...\n369 stream_link_length: Estimates the length of each link (or tributa ...\n370 stream_link_slope: Estimates the average slope of each link (or ...\n371 stream_slope_continuous: Estimates the slope of each grid cell in a st ...\n372 subbasins: Identifies the catchments, or sub-basin, drai ...\n373 subtract: Performs a differencing operation on two rast ...\n374 sum_overlay: Calculates the sum for each grid cell from a ...\n375 surface_area_ratio: Calculates a the surface area ratio of each g ...\n376 symmetrical_difference: Outputs the features that occur in one of the ...\n377 tin_gridding: Creates a raster grid based on a triangular i ...\n378 tan: Returns the tangent (tan) of each values in a ...\n379 tangential_curvature: Calculates a tangential curvature raster from ...\n380 tanh: Returns the hyperbolic tangent (tanh) of each ...\n381 thicken_raster_line: Thickens single-cell wide lines within a rast ...\n382 to_degrees: Converts a raster from radians to degrees. ...\n383 to_radians: Converts a raster from degrees to radians. ...\n384 tophat_transform: Performs either a white or black top-hat tran ...\n385 topological_stream_order: Assigns each link in a stream network its top ...\n386 total_curvature: Calculates a total curvature raster from an i ...\n387 total_filter: Performs a total filter on an input image. ...\n388 trace_downslope_flowpaths: Traces downslope flowpaths from one or more t ...\n389 trend_surface: Estimates the trend surface of an input raste ...\n390 trend_surface_vector_points: Estimates a trend surface from vector points. ...\n391 tributary_identifier: Assigns a unique identifier to each tributary ...\n392 truncate: Truncates the values in a raster to the desir ...\n393 turning_bands_simulation: Creates an image containing random values bas ...\n394 union: Splits vector layers at their overlaps, creat ...\n395 unnest_basins: Extract whole watersheds for a set of outlet ...\n396 unsharp_masking: An image sharpening technique that enhances e ...\n397 user_defined_weights_filter: Performs a user-defined weights filter on an ...\n398 vector_hex_binning: Hex-bins a set of vector points. 
...\n399 vector_lines_to_raster: Converts a vector containing polylines into a ...\n400 vector_points_to_raster: Converts a vector containing points into a ra ...\n401 vector_polygons_to_raster: Converts a vector containing polygons into a ...\n402 viewshed: Identifies the viewshed for a point or set of ...\n403 visibility_index: Estimates the relative visibility of sites in ...\n404 voronoi_diagram: Creates a vector Voronoi diagram for a set of ...\n405 watershed: Identifies the watershed, or drainage basin, ...\n406 weighted_overlay: Performs a weighted sum on multiple input ras ...\n407 weighted_sum: Performs a weighted-sum overlay on multiple i ...\n408 wetness_index: Calculates the topographic wetness index, Ln( ...\n409 write_function_memory_insertion: Performs a write function memory insertion fo ...\n410 xor: Performs a logical XOR operator on two Boolea ...\n411 z_scores: Standardizes the values in an input raster by ...\n412 zonal_statistics: Extracts descriptive statistics for a group o ...\n"
]
],
[
[
"## Getting data",
"_____no_output_____"
],
[
"This section demonstrates two ways to get data into Binder so that you can test **whitebox** on the cloud using your own data. \n\n* [Getting data from direct URLs](#Getting-data-from-direct-URLs) \n* [Getting data from Google Drive](#Getting-data-from-Google-Drive)",
"_____no_output_____"
],
[
"### Getting data from direct URLs\n\nIf you have data hosted on your own HTTP server or GitHub, you should be able to get direct URLs. With a direct URL, users can automatically download the data when the URL is clicked. For example https://github.com/giswqs/whitebox/raw/master/examples/testdata.zip",
"_____no_output_____"
],
[
"Import the following Python libraries and start getting data from direct URLs.",
"_____no_output_____"
]
],
[
[
"import os\nimport zipfile\nimport tarfile\nimport shutil\nimport urllib.request",
"_____no_output_____"
]
],
[
[
"Create a folder named *whitebox* under the user home folder and set it as the working directory.",
"_____no_output_____"
]
],
[
[
"work_dir = os.path.join(os.path.expanduser(\"~\"), 'whitebox')\nif not os.path.exists(work_dir):\n os.mkdir(work_dir)\nos.chdir(work_dir)\nprint(\"Working directory: {}\".format(work_dir))",
"Working directory: /home/qiusheng/whitebox\n"
]
],
[
[
"Replace the following URL with your own direct URL hosting your data.",
"_____no_output_____"
]
],
[
[
"url = \"https://github.com/giswqs/whitebox/raw/master/examples/testdata.zip\"",
"_____no_output_____"
]
],
[
[
"Download data the from the above URL and unzip the file if needed.",
"_____no_output_____"
]
],
[
[
"# download the file \nzip_name = os.path.basename(url)\nzip_path = os.path.join(work_dir, zip_name) \n\nprint('Downloading {} ...'.format(zip_name))\nurllib.request.urlretrieve(url, zip_path) \nprint('Downloading done.'.format(zip_name))\n\n# if it is a zip file\nif '.zip' in zip_name: \n print(\"Decompressing {} ...\".format(zip_name))\n with zipfile.ZipFile(zip_name, \"r\") as zip_ref:\n zip_ref.extractall(work_dir)\n print('Decompressing done.')\n\n# if it is a tar file\nif '.tar' in zip_name: \n print(\"Decompressing {} ...\".format(zip_name))\n with tarfile.open(zip_name, \"r\") as tar_ref:\n tar_ref.extractall(work_dir)\n print('Decompressing done.')\n \nprint('Data directory: {}'.format(os.path.splitext(zip_path)[0]))",
"Downloading testdata.zip ...\nDownloading done.\nDecompressing testdata.zip ...\nDecompressing done.\nData directory: /home/qiusheng/whitebox/testdata\n"
]
],
[
[
"You have successfully downloaded data to Binder. Therefore, you can skip to [Using whitebox](#Using-whitebox) and start testing whitebox with your own data. ",
"_____no_output_____"
],
[
"### Getting data from Google Drive\n\nAlternatively, you can upload data to [Google Drive](https://www.google.com/drive/) and then [share files publicly from Google Drive](https://support.google.com/drive/answer/2494822?co=GENIE.Platform%3DDesktop&hl=en). Once the file is shared publicly, you should be able to get a shareable URL. For example, https://drive.google.com/file/d/1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV.\n \nTo download files from Google Drive to Binder, you can use the Python package called [google-drive-downloader](https://github.com/ndrplz/google-drive-downloader), which can be installed using the following command:\n\n`pip install googledrivedownloader requests`",
"_____no_output_____"
],
[
"**Replace the following URL with your own shareable URL from Google Drive.**",
"_____no_output_____"
]
],
[
[
"gfile_url = 'https://drive.google.com/file/d/1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV'",
"_____no_output_____"
]
],
[
[
"**Extract the file id from the above URL.**",
"_____no_output_____"
]
],
[
[
"file_id = gfile_url.split('/')[5] #'1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV'\nprint('Google Drive file id: {}'.format(file_id))",
"Google Drive file id: 1xgxMLRh_jOLRNq-f3T_LXAaSuv9g_JnV\n"
]
],
[
[
"**Download the shared file from Google Drive.**",
"_____no_output_____"
]
],
[
[
"from google_drive_downloader import GoogleDriveDownloader as gdd\ndest_path = './testdata.zip' # choose a name for the downloaded file\ngdd.download_file_from_google_drive(file_id, dest_path, unzip=True)",
"_____no_output_____"
]
],
[
[
"You have successfully downloaded data from Google Drive to Binder. You can now continue to [Using whitebox](#Using-whitebox) and start testing whitebox with your own data. ",
"_____no_output_____"
],
[
"## Using whitebox",
"_____no_output_____"
],
[
"Here you can specify where your data are located. In this example, we will use [DEM.tif](https://github.com/giswqs/whitebox/blob/master/examples/testdata/DEM.tif), which has been downloaded to the testdata folder.",
"_____no_output_____"
],
[
"**List data under the data folder.**",
"_____no_output_____"
]
],
[
[
"data_dir = './testdata/'\nprint(os.listdir(data_dir))",
"['breached_sink.tif', 'DEM.dep', 'smoothed.tif', 'DEM.tif.aux.xml', 'DEM.tif', 'breached_sink.tif.aux.xml']\n"
]
],
[
[
"In this simple example, we smooth [DEM.tif](https://github.com/giswqs/whitebox/blob/master/examples/testdata/DEM.tif) using a [feature preserving denoising](https://github.com/jblindsay/whitebox-tools/blob/master/src/tools/terrain_analysis/feature_preserving_denoise.rs) algorithm. Then, we fill depressions in the DEM using a [depression breaching](https://github.com/jblindsay/whitebox-tools/blob/master/src/tools/hydro_analysis/breach_depressions.rs) algorithm. Finally, we calculate [flow accumulation](https://github.com/jblindsay/whitebox-tools/blob/master/src/tools/hydro_analysis/dinf_flow_accum.rs) based on the depressionless DEM.",
"_____no_output_____"
]
],
[
[
"import whitebox\nwbt = whitebox.WhiteboxTools()\n# set whitebox working directory\nwbt.set_working_dir(data_dir)\nwbt.verbose = False\n\n# call whiteboxtool\nwbt.feature_preserving_smoothing(\"DEM.tif\", \"smoothed.tif\", filter=9)\nwbt.breach_depressions(\"smoothed.tif\", \"breached.tif\")\nwbt.d_inf_flow_accumulation(\"breached.tif\", \"flow_accum.tif\")",
"_____no_output_____"
]
],
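[
[
"Each tool call above also accepts optional parameters (for example, `filter=9` is passed to the smoothing tool). If you are unsure what a tool expects, the `WhiteboxTools` object can print its help text. A minimal sketch, reusing the `wbt` object created above; the exact wording of the output depends on the installed WhiteboxTools version:\n\n```python\n# Print the reference help for one of the tools used above\nprint(wbt.tool_help(\"breach_depressions\"))\n\n# The installed WhiteboxTools version can be checked the same way\nprint(wbt.version())\n```",
"_____no_output_____"
]
],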
[
[
"## Displaying results\n\nThis section demonstrates how to display images on Jupyter Notebook. Three Python packages are used here, including [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), and [tifffile](https://pypi.org/project/tifffile/). These three packages can be installed using the following command:\n\n`pip install matplotlib imageio tifffile`\n",
"_____no_output_____"
],
[
"**Import the libraries.**",
"_____no_output_____"
]
],
[
[
"# comment out the third line (%matplotlib inline) if you run the tutorial in other IDEs other than Jupyter Notebook\nimport matplotlib.pyplot as plt\nimport imageio\n%matplotlib inline ",
"_____no_output_____"
]
],
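[
[
"The cells below read the GeoTIFF outputs with imageio. tifffile, installed alongside it above, can serve as an equivalent reader if your imageio build has trouble with TIFF files. A minimal sketch, reusing the `data_dir` defined earlier:\n\n```python\nimport tifffile\n\n# Read the DEM into a numpy array, just like imageio.imread\ndem_array = tifffile.imread(os.path.join(data_dir, 'DEM.tif'))\nprint(dem_array.shape, dem_array.dtype)\n```",
"_____no_output_____"
]
],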
[
[
"**Display one single image.**",
"_____no_output_____"
]
],
[
[
"raster = imageio.imread(os.path.join(data_dir, 'DEM.tif'))\nplt.imshow(raster)\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Read images as numpy arrays.**",
"_____no_output_____"
]
],
[
[
"original = imageio.imread(os.path.join(data_dir, 'DEM.tif'))\nsmoothed = imageio.imread(os.path.join(data_dir, 'smoothed.tif'))\nbreached = imageio.imread(os.path.join(data_dir, 'breached.tif'))\nflow_accum = imageio.imread(os.path.join(data_dir, 'flow_accum.tif'))",
"_____no_output_____"
]
],
[
[
"**Display multiple images in one plot.**",
"_____no_output_____"
]
],
[
[
"fig=plt.figure(figsize=(16,11))\n\nax1 = fig.add_subplot(2, 2, 1)\nax1.set_title('Original DEM')\nplt.imshow(original)\n\nax2 = fig.add_subplot(2, 2, 2)\nax2.set_title('Smoothed DEM')\nplt.imshow(smoothed)\n\nax3 = fig.add_subplot(2, 2, 3)\nax3.set_title('Breached DEM')\nplt.imshow(breached)\n\nax4 = fig.add_subplot(2, 2, 4)\nax4.set_title('Flow Accumulation')\nplt.imshow(flow_accum)\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"## whitebox GUI\n\nWhiteboxTools also provides a Graphical User Interface (GUI) - **WhiteboxTools Runner**, which can be invoked using the following Python script. *__Note that the GUI might not work in Jupyter notebooks deployed on the cloud (e.g., MyBinder.org), but it should work on Jupyter notebooks on local computers.__*\n\n```python\nimport whitebox\nwhitebox.Runner()\n\n```\n\n",
"_____no_output_____"
],
[
"## Citing whitebox\n\nIf you use the **whitebox** Python package for your research and publications, please consider citing the following papers to give Prof. [John Lindsay](http://www.uoguelph.ca/~hydrogeo/index.html) credits for his tremendous efforts in developing [Whitebox GAT](https://github.com/jblindsay/whitebox-geospatial-analysis-tools) and [WhiteboxTools](https://github.com/jblindsay/whitebox-tools). Without his work, this **whitebox** Python package would not exist! \n\n* Lindsay, J. B. (2016). Whitebox GAT: A case study in geomorphometric analysis. Computers & Geosciences, 95, 75-84. http://dx.doi.org/10.1016/j.cageo.2016.07.003",
"_____no_output_____"
],
[
"## Credits\n\nThis interactive notebook is made possible by [MyBinder.org](https://mybinder.org/). Big thanks to [MyBinder.org](https://mybinder.org/) for developing the amazing binder platform, which is extremely valuable for reproducible research!\n\nThis tutorial made use a number of open-source Python packages, including [ Cookiecutter](https://github.com/audreyr/cookiecutter), [numpy](http://www.numpy.org/), [matplotlib](https://matplotlib.org/), [imageio](https://imageio.readthedocs.io/en/stable/installation.html), [tifffile](https://pypi.org/project/tifffile/), and [google-drive-downloader](https://github.com/ndrplz/google-drive-downloader). Thanks to all developers of these wonderful Python packages!\n",
"_____no_output_____"
],
[
"## Contact\n\nIf you have any questions regarding this tutorial or the **whitebox** Python package, you can contact me (Dr. Qiusheng Wu) at [email protected] or https://wetlands.io/#contact",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c28c3dc3879b21cdc13d0ff0df7b06eab2d977 | 24,461 | ipynb | Jupyter Notebook | Dictionaries.ipynb | niksom406/IBM-Python-for-Data-Science | 2def1af9233c687cfbc5221671d8cf737e51eb79 | [
"MIT"
] | null | null | null | Dictionaries.ipynb | niksom406/IBM-Python-for-Data-Science | 2def1af9233c687cfbc5221671d8cf737e51eb79 | [
"MIT"
] | null | null | null | Dictionaries.ipynb | niksom406/IBM-Python-for-Data-Science | 2def1af9233c687cfbc5221671d8cf737e51eb79 | [
"MIT"
] | null | null | null | 22.670065 | 716 | 0.51298 | [
[
[
"<a href=\"https://cognitiveclass.ai/\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png\" width=\"200\" align=\"center\">\n</a>",
"_____no_output_____"
],
[
"<h1>Dictionaries in Python</h1>",
"_____no_output_____"
],
[
"<p><strong>Welcome!</strong> This notebook will teach you about the dictionaries in the Python Programming Language. By the end of this lab, you'll know the basics dictionary operations in Python, including what it is, and the operations on it.</p>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"https://cocl.us/NotebooksPython101\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png\" width=\"750\" align=\"center\">\n </a>\n</div>",
"_____no_output_____"
],
[
"<h2>Table of Contents</h2>\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <ul>\n <li>\n <a href=\"#dic\">Dictionaries</a>\n <ul>\n <li><a href=\"content\">What are Dictionaries?</a></li>\n <li><a href=\"key\">Keys</a></li>\n </ul>\n </li>\n <li>\n <a href=\"#quiz\">Quiz on Dictionaries</a>\n </li>\n </ul>\n <p>\n Estimated time needed: <strong>20 min</strong>\n </p>\n</div>\n\n<hr>",
"_____no_output_____"
],
[
"<h2 id=\"Dic\">Dictionaries</h2>",
"_____no_output_____"
],
[
"<h3 id=\"content\">What are Dictionaries?</h3>",
"_____no_output_____"
],
[
"A dictionary consists of keys and values. It is helpful to compare a dictionary to a list. Instead of the numerical indexes such as a list, dictionaries have keys. These keys are the keys that are used to access values within a dictionary.",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsList.png\" width=\"650\" />",
"_____no_output_____"
],
[
"An example of a Dictionary <code>Dict</code>:",
"_____no_output_____"
]
],
[
[
"# Create the dictionary\n\nDict = {\"key1\": 1, \"key2\": \"2\", \"key3\": [3, 3, 3], \"key4\": (4, 4, 4), ('key5'): 5, (0, 1): 6}\nDict",
"_____no_output_____"
]
],
[
[
"The keys can be strings:",
"_____no_output_____"
]
],
[
[
"# Access to the value by the key\n\nDict[\"key1\"]",
"_____no_output_____"
]
],
[
[
"Keys can also be any immutable object such as a tuple: ",
"_____no_output_____"
]
],
[
[
"# Access to the value by the key\n\nDict[(0, 1)]",
"_____no_output_____"
]
],
[
[
" Each key is separated from its value by a colon \"<code>:</code>\". Commas separate the items, and the whole dictionary is enclosed in curly braces. An empty dictionary without any items is written with just two curly braces, like this \"<code>{}</code>\".",
"_____no_output_____"
]
],
[
[
"# Create a sample dictionary\n\nrelease_year_dict = {\"Thriller\": \"1982\", \"Back in Black\": \"1980\", \\\n \"The Dark Side of the Moon\": \"1973\", \"The Bodyguard\": \"1992\", \\\n \"Bat Out of Hell\": \"1977\", \"Their Greatest Hits (1971-1975)\": \"1976\", \\\n \"Saturday Night Fever\": \"1977\", \"Rumours\": \"1977\"}\nrelease_year_dict",
"_____no_output_____"
]
],
[
[
"In summary, like a list, a dictionary holds a sequence of elements. Each element is represented by a key and its corresponding value. Dictionaries are created with two curly braces containing keys and values separated by a colon. For every key, there can only be one single value, however, multiple keys can hold the same value. Keys can only be strings, numbers, or tuples, but values can be any data type.",
"_____no_output_____"
],
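[
"For example, the following short snippet (the album names here are made up for illustration) shows that two different keys may hold the same value, while assigning to an existing key simply replaces its old value rather than creating a second entry:\n\n```python\n# Two different keys can hold the same value\nalbum_ratings = {\"Album A\": 5, \"Album B\": 5}\n\n# Re-assigning an existing key overwrites its value\nalbum_ratings[\"Album A\"] = 4\nalbum_ratings # {'Album A': 4, 'Album B': 5}\n```",
"_____no_output_____"
],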
[
"It is helpful to visualize the dictionary as a table, as in the following image. The first column represents the keys, the second column represents the values.",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsStructure.png\" width=\"650\" />",
"_____no_output_____"
],
[
"<h3 id=\"key\">Keys</h3>",
"_____no_output_____"
],
[
"You can retrieve the values based on the names:",
"_____no_output_____"
]
],
[
[
"# Get value by keys\n\nrelease_year_dict['Thriller'] ",
"_____no_output_____"
]
],
[
[
"This corresponds to: \n",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyOne.png\" width=\"500\" />",
"_____no_output_____"
],
[
"Similarly for <b>The Bodyguard</b>",
"_____no_output_____"
]
],
[
[
"# Get value by key\n\nrelease_year_dict['The Bodyguard'] ",
"_____no_output_____"
]
],
[
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/DictsKeyTwo.png\" width=\"500\" />",
"_____no_output_____"
],
[
"Now let you retrieve the keys of the dictionary using the method <code>release_year_dict()</code>:",
"_____no_output_____"
]
],
[
[
"# Get all the keys in dictionary\n\nrelease_year_dict.keys() ",
"_____no_output_____"
]
],
[
[
"You can retrieve the values using the method <code>values()</code>:",
"_____no_output_____"
]
],
[
[
"# Get all the values in dictionary\n \nrelease_year_dict.values() ",
"_____no_output_____"
]
],
[
[
"We can add an entry:",
"_____no_output_____"
]
],
[
[
"# Append value with key into dictionary\n\nrelease_year_dict['Graduation'] = '2007'\nrelease_year_dict",
"_____no_output_____"
]
],
[
[
"We can delete an entry: ",
"_____no_output_____"
]
],
[
[
"# Delete entries by key\n\ndel(release_year_dict['Thriller'])\ndel(release_year_dict['Graduation'])\nrelease_year_dict",
"_____no_output_____"
]
],
[
[
" We can verify if an element is in the dictionary: ",
"_____no_output_____"
]
],
[
[
"# Verify the key is in the dictionary\n\n'The Bodyguard' in release_year_dict",
"_____no_output_____"
]
],
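[
[
"Checking for a key with <code>in</code> before accessing it is one way to avoid a <code>KeyError</code>. An alternative sketch uses the <code>get()</code> method, which returns <code>None</code> (or a default of your choice) when the key is missing:\n\n```python\n# \"Thriller\" was deleted above, so get() falls back to the default\nrelease_year_dict.get(\"Thriller\", \"Not in dictionary\")\n```",
"_____no_output_____"
]
],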
[
[
"<hr>",
"_____no_output_____"
],
[
"<h2 id=\"quiz\">Quiz on Dictionaries</h2>",
"_____no_output_____"
],
[
"<b>You will need this dictionary for the next two questions:</b>",
"_____no_output_____"
]
],
[
[
"# Question sample dictionary\n\nsoundtrack_dic = {\"The Bodyguard\":\"1992\", \"Saturday Night Fever\":\"1977\"}\nsoundtrack_dic ",
"_____no_output_____"
]
],
[
[
"a) In the dictionary <code>soundtrack_dict</code> what are the keys ?",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\nsoundtrack_dic = {\"The Bodyguard\":\"1992\", \"Saturday Night Fever\":\"1977\"}\nsoundtrack_dic.keys()\n",
"_____no_output_____"
]
],
[
[
"Double-click __here__ for the solution.\n\n<!-- Your answer is below:\nsoundtrack_dic.keys() # The Keys \"The Bodyguard\" and \"Saturday Night Fever\" \n-->",
"_____no_output_____"
],
[
"b) In the dictionary <code>soundtrack_dict</code> what are the values ?",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\n\n\nsoundtrack_dic.values()\n\n\n\n",
"_____no_output_____"
]
],
[
[
"Double-click __here__ for the solution.\n\n<!-- Your answer is below:\nsoundtrack_dic.values() # The values are \"1992\" and \"1977\"\n-->",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<b>You will need this dictionary for the following questions:</b>",
"_____no_output_____"
],
[
"The Albums <b>Back in Black</b>, <b>The Bodyguard</b> and <b>Thriller</b> have the following music recording sales in millions 50, 50 and 65 respectively:",
"_____no_output_____"
],
[
"a) Create a dictionary <code>album_sales_dict</code> where the keys are the album name and the sales in millions are the values. ",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\nalbum_sales_dict = {'Back in Black':50,'The Bodyguard':50,'Thriller' : 65}\n\n \nalbum_sales_dict\n\n\n",
"_____no_output_____"
]
],
[
[
"Double-click __here__ for the solution.\n\n<!-- Your answer is below:\nalbum_sales_dict = {\"The Bodyguard\":50, \"Back in Black\":50, \"Thriller\":65}\n-->",
"_____no_output_____"
],
[
"b) Use the dictionary to find the total sales of <b>Thriller</b>:",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\nalbum_sales_dict['Thriller']\n",
"_____no_output_____"
]
],
[
[
"Double-click __here__ for the solution.\n\n<!-- Your answer is below:\nalbum_sales_dict[\"Thriller\"]\n-->",
"_____no_output_____"
],
[
"c) Find the names of the albums from the dictionary using the method <code>keys</code>:",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\n\nalbum_sales_dict.keys()",
"_____no_output_____"
],
[
"A = [1,4,'John']\nA[0]",
"_____no_output_____"
],
[
"A = (1,4,'John')\nA[1]",
"_____no_output_____"
],
[
"A = {1:4,'John':'x'}\nA['John']",
"_____no_output_____"
],
[
"\nA = [1,4,'John']\nA[0:2]\nA[::2]",
"_____no_output_____"
],
[
"A = (1,4,'John')\nA[::2]",
"_____no_output_____"
],
[
".a",
"_____no_output_____"
]
],
[
[
"Double-click __here__ for the solution.\n\n<!-- Your answer is below:\nalbum_sales_dict.keys()\n-->",
"_____no_output_____"
],
[
"d) Find the names of the recording sales from the dictionary using the method <code>values</code>:",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\nalbum_sales_dict.values()",
"_____no_output_____"
]
],
[
[
"Double-click __here__ for the solution.\n\n<!-- Your answer is below:\nalbum_sales_dict.values()\n-->",
"_____no_output_____"
],
[
"<hr>\n<h2>The last exercise!</h2>\n<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href=\"https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/\" target=\"_blank\">this article</a> to learn how to share your work.\n<hr>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<h2>Get IBM Watson Studio free of charge!</h2>\n <p><a href=\"https://cocl.us/NotebooksPython101bottom\"><img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png\" width=\"750\" align=\"center\"></a></p>\n</div>",
"_____no_output_____"
],
[
"<h3>About the Authors:</h3> \n<p><a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\" target=\"_blank\">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>",
"_____no_output_____"
],
[
"Other contributors: <a href=\"www.linkedin.com/in/jiahui-mavis-zhou-a4537814a\">Mavis Zhou</a>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href=\"https://cognitiveclass.ai/mit-license/\">MIT License</a>.</p>",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c2a113601daf28cd8d2d3f8c6fe179679dfbde | 91,719 | ipynb | Jupyter Notebook | notebooks/02_single_stock_trade.ipynb | philtrade/gQuant | 08b2a82a257c234b92f097b925f25cab16fd0926 | [
"Apache-2.0"
] | null | null | null | notebooks/02_single_stock_trade.ipynb | philtrade/gQuant | 08b2a82a257c234b92f097b925f25cab16fd0926 | [
"Apache-2.0"
] | null | null | null | notebooks/02_single_stock_trade.ipynb | philtrade/gQuant | 08b2a82a257c234b92f097b925f25cab16fd0926 | [
"Apache-2.0"
] | null | null | null | 264.319885 | 81,476 | 0.914075 | [
[
[
"### gQuant Tutorial\nFirst import all the necessary modules.",
"_____no_output_____"
]
],
[
[
"import sys; sys.path.insert(0, '..')\nimport os\nimport warnings\nimport ipywidgets as widgets\nfrom gquant.dataframe_flow import TaskGraph\n\nwarnings.simplefilter(\"ignore\")",
"_____no_output_____"
]
],
[
[
"In this tutorial, we are going to use gQuant to do a simple quant job. The task is fully described in a yaml file",
"_____no_output_____"
]
],
[
[
"!head -n 31 ../task_example/simple_trade.yaml",
"- id: load_csv_data\n type: CsvStockLoader\n conf:\n path: ./data/stock_price_hist.csv.gz\n inputs: []\n- id: node_assetFilter\n type: AssetFilterNode\n conf:\n asset: 22123\n inputs: \n - load_csv_data\n- id: node_sort\n type: SortNode\n conf:\n keys: \n - asset\n - datetime\n inputs: \n - node_assetFilter\n- id: node_addReturn\n type: ReturnFeatureNode\n conf: {}\n inputs: \n - node_sort\n- id: node_ma_strategy\n type: MovingAverageStrategyNode\n conf:\n fast: 5\n slow: 10\n inputs: \n - node_addReturn\n"
]
],
[
[
"The yaml file is describing the computation task by a graph, we can visualize it",
"_____no_output_____"
]
],
[
[
"task_graph = TaskGraph.load_taskgraph('../task_example/simple_trade.yaml')\ntask_graph.draw(show='ipynb')",
"_____no_output_____"
]
],
[
[
"We define a method to organize the output images",
"_____no_output_____"
]
],
[
[
"def plot_figures(o, symbol):\n # format the figures\n figure_width = '1200px'\n figure_height = '400px'\n bar_figure = o[2]\n sharpe_number = o[0]\n cum_return = o[1]\n signals = o[3]\n\n bar_figure.layout.height = figure_height\n bar_figure.layout.width = figure_width\n cum_return.layout.height = figure_height\n cum_return.layout.width = figure_width\n cum_return.title = 'P & L %.3f' % (sharpe_number)\n bar_figure.marks[0].labels = [symbol]\n cum_return.marks[0].labels = [symbol]\n signals.layout.height = figure_height\n signals.layout.width = figure_width\n bar_figure.axes = [bar_figure.axes[0]]\n cum_return.axes = [cum_return.axes[0]]\n output = widgets.VBox([bar_figure, cum_return, signals])\n\n return output",
"_____no_output_____"
]
],
[
[
"We load the symbol name to symbol id mapping file:",
"_____no_output_____"
]
],
[
[
"node_stockSymbol = {\"id\": \"node_stockSymbol\",\n \"type\": \"StockNameLoader\",\n \"conf\": {\"path\": \"./data/security_master.csv.gz\"},\n \"inputs\": []}\nname_graph = TaskGraph([node_stockSymbol])\nlist_stocks = name_graph.run(outputs=['node_stockSymbol'])[0].to_pandas().set_index('asset_name').to_dict()['asset']",
"_____no_output_____"
]
],
[
[
"Evaluate the output nodes and plot the results:",
"_____no_output_____"
]
],
[
[
"symbol = 'REXX'\naction = \"load\" if os.path.isfile('./.cache/load_csv_data.hdf5') else \"save\"\no = task_graph.run(\n outputs=['node_sharpeRatio', 'node_cumlativeReturn',\n 'node_barplot', 'node_lineplot', 'load_csv_data'],\n replace={'load_csv_data': {action: True},\n 'node_barplot': {'conf': {\"points\": 300}},\n 'node_assetFilter':\n {'conf': {'asset': list_stocks[symbol]}}})\ncached_input = o[4]\nplot_figures(o, symbol)",
"_____no_output_____"
]
],
[
[
"Change the strategy parameters",
"_____no_output_____"
]
],
[
[
"o = task_graph.run(\n outputs=['node_sharpeRatio', 'node_cumlativeReturn',\n 'node_barplot', 'node_lineplot'],\n replace={'load_csv_data': {\"load\": cached_input},\n 'node_barplot': {'conf': {\"points\": 200}},\n 'node_ma_strategy': {'conf': {'fast': 1, 'slow': 10}},\n 'node_assetFilter': {'conf': {'asset': list_stocks[symbol]}}})\nfigure_combo = plot_figures(o, symbol)\nfigure_combo",
"_____no_output_____"
],
[
"add_stock_selector = widgets.Dropdown(options=list_stocks.keys(),\n value=None, description=\"Add stock\")\npara_selector = widgets.IntRangeSlider(value=[10, 30],\n min=3,\n max=60,\n step=1,\n description=\"MA:\",\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True)\n\n\ndef para_selection(*stocks):\n with out:\n symbol = add_stock_selector.value\n para1 = para_selector.value[0]\n para2 = para_selector.value[1]\n o = task_graph.run(\n outputs=['node_sharpeRatio', 'node_cumlativeReturn',\n 'node_barplot', 'node_lineplot'],\n replace={'load_csv_data': {\"load\": cached_input},\n 'node_barplot': {'conf': {\"points\": 200}},\n 'node_ma_strategy': {'conf': {'fast': para1, 'slow': para2}},\n 'node_assetFilter': {'conf': {'asset': list_stocks[symbol]}}})\n figure_combo = plot_figures(o, symbol)\n if (len(w.children) < 2):\n w.children = (w.children[0], figure_combo,)\n else:\n w.children[1].children[1].marks = figure_combo.children[1].marks\n w.children[1].children[2].marks = figure_combo.children[2].marks\n w.children[1].children[1].title = 'P & L %.3f' % (o[0])\n\n\nout = widgets.Output(layout={'border': '1px solid black'})\nadd_stock_selector.observe(para_selection, 'value')\npara_selector.observe(para_selection, 'value')\nselectors = widgets.HBox([add_stock_selector, para_selector])\nw = widgets.VBox([selectors])\nw",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0c2a2f9efb235b34ec4041b783fd4529358baa5 | 10,609 | ipynb | Jupyter Notebook | services/python-jupyter/scripts/CargaMetastore.ipynb | irvcaza/datalake4os | bfe9152e9527ecc3a4928e0d93df1118152025e2 | [
"Apache-2.0"
] | null | null | null | services/python-jupyter/scripts/CargaMetastore.ipynb | irvcaza/datalake4os | bfe9152e9527ecc3a4928e0d93df1118152025e2 | [
"Apache-2.0"
] | null | null | null | services/python-jupyter/scripts/CargaMetastore.ipynb | irvcaza/datalake4os | bfe9152e9527ecc3a4928e0d93df1118152025e2 | [
"Apache-2.0"
] | null | null | null | 33.466877 | 317 | 0.561127 | [
[
[
"# Cuaderno para cargar metadatos \n\nEste cuaderno toma un directorio de MinIO con estructura de datos abiertos y crea la definición para Hive-Metastore para cada una de las tablas",
"_____no_output_____"
],
[
"## Librerias",
"_____no_output_____"
]
],
[
[
"from minio import Minio\nimport pandas as pd\nfrom io import StringIO\nfrom io import BytesIO\nimport json\nfrom pyhive import hive\n",
"_____no_output_____"
]
],
[
[
"## Definicion de coneccion a MinIO",
"_____no_output_____"
]
],
[
[
"client = Minio(\n \"minio:9000\",\n access_key=\"minio\",\n secret_key=\"minio123\",\n secure=False\n )",
"_____no_output_____"
]
],
[
[
"Ruta al diccionario de correpondencia ",
"_____no_output_____"
]
],
[
[
"#corr_file = None\ncorr_file = 'correspondence.json'\n#corr_file = 'correspondence.csv'",
"_____no_output_____"
]
],
[
[
"## Funciones de ayuda",
"_____no_output_____"
]
],
[
[
"# Obtine el diccionario de correspondencia de un archivo \ndef get_corrr_dic(corr_file = None):\n corr_dic = {\"tables\":{},\"columns\":{}}\n if corr_file is None:\n # Si se pasa None a la funcion dara un diccionario en blanco \n pass\n elif corr_file.endswith(\".json\"):\n with open(corr_file, 'r') as myfile:\n corr_dic = json.loads(myfile.read())\n elif corr_file.endswith(\".csv\"):\n df = pd.read_csv(corr_file)\n df.apply(fill_dic,1,new_dic=corr_dic) \n else:\n raise Exception(\"Format not found, only .json and .csv\")\n return corr_dic\n \n# llena los datos en un diccionario de correpondencia con la informacion de un renglón \ndef fill_dic(row,new_dic):\n if row[\"type\"] == \"tables\":\n new_dic[\"tables\"][row[\"original_name\"]]=row[\"final_name\"]\n if row[\"type\"] == \"columns\":\n try:\n new_dic[\"columns\"][row[\"table\"]][row[\"original_name\"]]=row[\"final_name\"]\n except KeyError:\n new_dic[\"columns\"][row[\"table\"]] = {}\n new_dic[\"columns\"][row[\"table\"]][row[\"original_name\"]]=row[\"final_name\"]\n\n# Crea la definicion de la tabla en el directorio dado utilizando el diccionario de datos\ndef create_hive_table(client, bucket, directory_object_name):\n table_name = get_table_name(directory_object_name)\n col_def = get_col_def(client, bucket, directory_object_name)\n data_location = directory_object_name+\"conjunto_de_datos/\"\n # TODO: revisar si la tabla ya existe \n table_def = \"\"\" CREATE EXTERNAL TABLE {} ({})\n ROW FORMAT DELIMITED\n FIELDS TERMINATED BY ','\n LINES TERMINATED BY '\\\\n'\n LOCATION 's3a://{}/{}'\n TBLPROPERTIES ('skip.header.line.count'='1')\n \"\"\".format(table_name,col_def,bucket,data_location)\n return table_def\n\n\n# Funcion para establecer el nombre de la tabla, en una version peeliminar es el nombre del directorio, pero se puede refinar para lograr algo mas conciso \ndef get_table_name(directory_object_name):\n old_name = directory_object_name.split(\"/\")[-2]\n try:\n new_name = corr_dic[\"tables\"][old_name]\n except KeyError:\n new_name = old_name\n return new_name\n\ndef get_col_name(variable,table_name):\n try:\n new_name = corr_dic[\"columns\"][table_name][variable]\n except KeyError:\n new_name = variable\n return new_name\n \n# Fucion que crea la definicion de las variables, nombre de la variable y tipo \ndef get_col_def (client, bucket, directory_object_name):\n data_dictionary = get_data_dictionary(client, bucket, directory_object_name)\n # TODO: revisar si este ese el orden real de las columans de la tabla \n table_name = get_table_name(directory_object_name)\n names = [get_col_name(x,table_name) for x in data_dictionary[\"Variable\"]]\n types = [get_type(x) for x in data_dictionary[\"Tipo\"]]\n return \", \".join([\"{} {}\".format(x,y) for x,y in zip(names,types)])\n \n# Obtiene el diccionario de datos del formato de datos abiertos para definir las varibles\ndef get_data_dictionary(client, bucket, directory_object_name):\n dic_location = [obj.object_name for obj in client.list_objects(bucket, directory_object_name+\"diccionario_de_datos/\")] \n try:\n response = client.get_object(bucket, dic_location[0])\n s = str(response.read(),'latin-1')\n finally:\n response.close()\n response.release_conn()\n df = pd.read_csv(StringIO(s),names=[\"id\",\"Variable\",\"Descripcion\",\"Tipo\",\"valor\",\"etiqueta_rango\"], index_col=False)\n valid_entries = ~pd.to_numeric(df[\"id\"],errors='coerce',downcast='integer').isna()\n return df[valid_entries]\n\n# Convierte el tipo de columna del estilo datos abiertos al estilo SQL\ndef 
get_type(entrada):\n # TODO: Investigar los tipos de HIve y ver cual se justa mejor\n # TODO: agregar lognitudes \n lookup = {\"C\":\"STRING\",\"N\":\"DECIMAL\"}\n for key in lookup:\n if entrada.startswith(key):\n return lookup[key]\n raise Exception('Variable del tipo \"{}\" no encontrado'.format(entrada))\n\n# atraviesa todos los sub-directorios para crear un a tabla para cada uno\ndef create_dataset_tables(client,bucket,data_set):\n definitions = [create_hive_table(client, bucket, obj.object_name) for obj in client.list_objects(bucket, data_set) if obj.is_dir]\n return definitions\n\n# ToDo: ejecutar directamente en el Hive-Metastore y revisar si la definición ha sido exitosa \n ",
"_____no_output_____"
]
],
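[
[
"Based on how `get_corrr_dic` and `get_col_name` are written above, a `correspondence.json` file is expected to contain a `tables` mapping (original directory name to final table name) and a `columns` mapping keyed by the final table name. A minimal sketch that writes such a file; the table and column names used here are made up for illustration:\n\n```python\nimport json\n\nexample_correspondence = {\n \"tables\": {\"conjunto_de_datos_viviendas_enigh_2018_ns\": \"viviendas_2018\"},\n \"columns\": {\"viviendas_2018\": {\"folioviv\": \"folio_vivienda\"}}\n}\n\nwith open(\"correspondence.json\", \"w\") as out_file:\n json.dump(example_correspondence, out_file, indent=2)\n```",
"_____no_output_____"
]
],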
[
[
"## Ejemplo de ejecucion\n\nEn este ejemplo se toman los datos cargados en el cuaderno CargaObjetos.ipynb para crear las definiciones de tablas.\n\n**Nota:** En los datos a descargar hay un problema con el archivo *conjunto_de_datos_enigh_2018_ns_csv\\conjunto_de_datos_poblacion_enigh_2018_ns\\diccionario_de_datos\\diccionario_datos_poblacion_enigh_2018_ns.csv* en la linea *81* es necesario poner comillas para que el csv se detecte de manera correcta. \nSe puede hacer de manera automatica en el cuaderno DescargaDatos.ipynb",
"_____no_output_____"
]
],
[
[
"corr_dic = get_corrr_dic(corr_file)",
"_____no_output_____"
],
[
"sqls = create_dataset_tables(client,\"hive\",\"warehouse/conjunto_de_datos_enigh_2018_ns_csv/\")\n\nsqls += create_dataset_tables(client,\"hive\",\"warehouse/conjunto_de_datos_enigh2016_nueva_serie_csv/\")\n\nsqls += create_dataset_tables(client,\"hive\",\"warehouse/enigh_ncv_2014_csv/\")",
"_____no_output_____"
],
[
"print(sqls[0])",
"_____no_output_____"
],
[
"def get_connection(host='hive-server',port=10000,auth='NOSASL',database=\"default\"):\n conn = hive.Connection(host=host,port=port,auth=auth,database=database)\n return conn\n\ndef procc_SQL_list(list_sqls):\n conn = get_connection()\n cursor = conn.cursor()\n for sql in list_sqls:\n try:\n cursor.execute(sql)\n except:\n print(\" Error al crear la tabla \"+sql.split(\" \")[11])\n conn.close()\n\ndef drop_list_tables(list_sqls):\n conn = get_connection()\n cursor = conn.cursor()\n for sql in list_sqls:\n try:\n cursor.execute(\"DROP TABLE \"+sql.split(\" \")[3])\n except:\n print(\"Error al eliminar tabla \"+sql.split(\" \")[3])\n conn.close()\n\n \ndef list_tables():\n conn = get_connection()\n try:\n df = pd.read_sql(\"SHOW TABLES\", conn)\n except:\n print(\"Error al listar tablas\")\n conn.close()\n return df",
"_____no_output_____"
],
[
"procc_SQL_list(sqls)",
"_____no_output_____"
],
[
"list_tables()",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2aac8c1b86e75dbd3d6041e1f365d090f701b | 323,964 | ipynb | Jupyter Notebook | Full presentation.ipynb | Journeyman08/BayesianSurvivalAnalysis | 9b7149c74123506c6532aa0ba4049c555f3bc083 | [
"MIT"
] | 3 | 2016-07-24T03:17:38.000Z | 2018-01-13T19:35:58.000Z | Full presentation.ipynb | Journeyman08/BayesianSurvivalAnalysis | 9b7149c74123506c6532aa0ba4049c555f3bc083 | [
"MIT"
] | null | null | null | Full presentation.ipynb | Journeyman08/BayesianSurvivalAnalysis | 9b7149c74123506c6532aa0ba4049c555f3bc083 | [
"MIT"
] | 2 | 2016-05-05T08:25:04.000Z | 2021-12-08T14:03:00.000Z | 175.971754 | 42,940 | 0.857055 | [
[
[
"import lifelines\nimport pymc as pm\nimport pyBMA\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom math import log\nfrom datetime import datetime\nimport pandas as pd\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
"The first step in any data analysis is acquiring and munging the data\n\nAn example data set can be found at:\n https://jakecoltman.gitlab.io/website/post/pydata/\n\nDownload the file output.txt and transform it into a format like below where the event column should be 0 if there's only one entry for an id, and 1 if there are two entries:\n\nEnd date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)\n\nid,time_to_convert,age,male,event,search,brand",
"_____no_output_____"
]
],
[
[
"running_id = 0\noutput = [[0]]\nwith open(\"E:/output.txt\") as file_open:\n for row in file_open.read().split(\"\\n\"):\n cols = row.split(\",\")\n if cols[0] == output[-1][0]:\n output[-1].append(cols[1])\n output[-1].append(True)\n else:\n output.append(cols)\n output = output[1:]\n \nfor row in output:\n if len(row) == 6:\n row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]\noutput = output[1:-1]",
"_____no_output_____"
],
[
"def convert_to_days(dt):\n day_diff = dt / np.timedelta64(1, 'D')\n if day_diff == 0:\n return 23.0\n else: \n return day_diff\n\ndf = pd.DataFrame(output, columns=[\"id\", \"advert_time\", \"male\",\"age\",\"search\",\"brand\",\"conversion_time\",\"event\"])\ndf[\"lifetime\"] = pd.to_datetime(df[\"conversion_time\"]) - pd.to_datetime(df[\"advert_time\"])\ndf[\"lifetime\"] = df[\"lifetime\"].apply(convert_to_days)\ndf[\"male\"] = df[\"male\"].astype(int)\ndf[\"search\"] = df[\"search\"].astype(int)\ndf[\"brand\"] = df[\"brand\"].astype(int)\ndf[\"age\"] = df[\"age\"].astype(int)\ndf[\"event\"] = df[\"event\"].astype(int)\ndf = df.drop('advert_time', 1)\ndf = df.drop('conversion_time', 1)\ndf = df.set_index(\"id\")\ndf = df.dropna(thresh=2)\ndf.median()\ndf",
"_____no_output_____"
],
[
"###Parametric Bayes\n#Shout out to Cam Davidson-Pilon",
"_____no_output_____"
],
[
"## Example fully worked model using toy data\n## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html\n## Note that we've made some corrections\ncensor = np.array(df[\"event\"].apply(lambda x: 0 if x else 1).tolist())\nalpha = pm.Uniform(\"alpha\", 0,50) \nbeta = pm.Uniform(\"beta\", 0,50) \n\[email protected]\ndef survival(value=df[\"lifetime\"], alpha = alpha, beta = beta ):\n return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))\n\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(10000)",
" [-----------------100%-----------------] 10000 of 10000 complete in 24.9 sec"
],
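[
"The potential defined above is the standard log-likelihood for right-censored Weibull data: observed conversions contribute the log of the Weibull density, while censored users contribute the log of the survival function. The quantity summed over users is\n\n$$δ (log(α/β) + (α-1) log(t/β)) - (t/β)^α$$\n\nwhere t is the observed lifetime and δ (the 1-censor term in the code) is 1 when the conversion was observed and 0 when the user is censored.",
"_____no_output_____"
],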
[
"pm.Matplot.plot(mcmc)\nmcmc.trace(\"alpha\")[:]",
"Plotting alpha\nPlotting beta\n"
]
],
[
[
"Problems: \n\n 2 - Try to fit your data from section 1 \n 3 - Use the results to plot the distribution of the median\n --------\n 4 - Try adjusting the number of samples, the burn parameter and the amount of thinning to correct get good answers\n 5 - Try adjusting the prior and see how it affects the estimate\n --------\n 6 - Try to fit a different distribution to the data\n 7 - Compare answers\n Bonus - test the hypothesis that the true median is greater than a certain amount\n \nFor question 2, note that the median of a Weibull is:\n\n$$β(log 2)^{1/α}$$\n ",
"_____no_output_____"
]
],
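[
[
"The median formula quoted above follows directly from the Weibull survival function S(t) = exp(-(t/β)^α): setting S(t) equal to 1/2 and solving gives (t/β)^α = log 2, and hence the median\n\n$$t = β(log 2)^{1/α}$$\n\nwhich is exactly what the `weibull_median` helper in the solutions below computes.",
"_____no_output_____"
]
],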
[
[
"#Solution to question 4:\ndef weibull_median(alpha, beta):\n return beta * ((log(2)) ** ( 1 / alpha))\nplt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace(\"alpha\"), mcmc.trace(\"beta\"))])",
"_____no_output_____"
],
[
"#Solution to question 4:\n### Increasing the burn parameter allows us to discard results before convergence\n### Thinning the results removes autocorrelation\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(10000, burn = 3000, thin = 20)\n\npm.Matplot.plot(mcmc)",
" [-----------------100%-----------------] 10000 of 10000 complete in 22.0 secPlotting alpha\nPlotting beta\n"
],
[
"#Solution to Q5\n## Adjusting the priors impacts the overall result\n## If we give a looser, less informative prior then we end up with a broader, shorter distribution\n## If we give much more informative priors, then we get a tighter, taller distribution\n\ncensor = np.array(df[\"event\"].apply(lambda x: 0 if x else 1).tolist())\n\n## Note the narrowing of the prior\nalpha = pm.Normal(\"alpha\", 1.7, 10000) \nbeta = pm.Normal(\"beta\", 18.5, 10000) \n\n####Uncomment this to see the result of looser priors\n## Note this ends up pretty much the same as we're already very loose\n#alpha = pm.Uniform(\"alpha\", 0, 30) \n#beta = pm.Uniform(\"beta\", 0, 30) \n\[email protected]\ndef survival(value=df[\"lifetime\"], alpha = alpha, beta = beta ):\n return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))\n\nmcmc = pm.MCMC([alpha, beta, survival ] )\nmcmc.sample(10000, burn = 5000, thin = 20)\npm.Matplot.plot(mcmc)\n#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace(\"alpha\"), mcmc.trace(\"beta\"))])",
" [-----------------100%-----------------] 10000 of 10000 complete in 18.7 secPlotting alpha\nPlotting beta\n"
],
[
"## Solution to bonus\n## Super easy to do in the Bayesian framework, all we need to do is look at what % of samples\n## meet our criteria\nmedians = [weibull_median(x[0], x[1]) for x in zip(mcmc.trace(\"alpha\"), mcmc.trace(\"beta\"))]\ntesting_value = 15.6\nnumber_of_greater_samples = sum([x >= testing_value for x in medians])\n100 * (number_of_greater_samples / len(medians))",
"_____no_output_____"
],
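[
"## A quick posterior summary (one possible sanity check): a point estimate and 95% credible interval\n## for the Weibull median, reusing the `medians` list computed in the cell above.\nlower, mid, upper = np.percentile(medians, [2.5, 50, 97.5])\nprint(\"Median estimate:\", round(mid, 2), \"with 95% credible interval\", round(lower, 2), \"to\", round(upper, 2))",
"_____no_output_____"
],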
[
"#Cox model",
"_____no_output_____"
]
],
[
[
"If we want to look at covariates, we need a new approach. We'll use Cox proprtional hazards. More information here.",
"_____no_output_____"
]
],
[
[
"#Fitting solution\ncf = lifelines.CoxPHFitter()\ncf.fit(df, 'lifetime', event_col = 'event')\ncf.summary",
"C:\\Users\\j.coltman\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\lifelines\\fitters\\coxph_fitter.py:285: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)\n df.sort(duration_col, inplace=True)\n"
]
],
[
[
"Once we've fit the data, we need to do something useful with it. Try to do the following things:\n\n 1 - Plot the baseline survival function\n\n 2 - Predict the functions for a particular set of features\n\n 3 - Plot the survival function for two different set of features\n\n 4 - For your results in part 3 caculate how much more likely a death event is for one than the other for a given period of time",
"_____no_output_____"
]
],
[
[
"#Solution to 1\nfig, axis = plt.subplots(nrows=1, ncols=1)\ncf.baseline_survival_.plot(ax = axis, title = \"Baseline Survival\")",
"_____no_output_____"
],
[
"# Solution to prediction\nregressors = np.array([[1,45,0,0]])\nsurvival = cf.predict_survival_function(regressors)\nsurvival",
"_____no_output_____"
],
[
"#Solution to plotting multiple regressors\nfig, axis = plt.subplots(nrows=1, ncols=1, sharex=True)\nregressor1 = np.array([[1,45,0,1]])\nregressor2 = np.array([[1,23,1,1]])\nsurvival_1 = cf.predict_survival_function(regressor1)\nsurvival_2 = cf.predict_survival_function(regressor2)\nplt.plot(survival_1,label = \"32 year old male\")\nplt.plot(survival_2,label = \"46 year old female\")\nplt.legend(loc = \"lower left\")",
"_____no_output_____"
],
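[
"## One way to tackle part 4: compare the cumulative probability of the event, 1 - S(t),\n## for the two profiles predicted in the cell above (survival_1 and survival_2).\ndeath_1 = 1 - survival_1\ndeath_2 = 1 - survival_2\nrelative_risk = death_1 / death_2\nrelative_risk.plot(title=\"Relative probability of the event over time (profile 1 vs profile 2)\")",
"_____no_output_____"
],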
[
"#Difference in survival \nodds = survival_1 / survival_2\nplt.plot(odds, c = \"red\")",
"_____no_output_____"
]
],
[
[
"Model selection\n\nDifficult to do with classic tools (here)\n\nProblem:\n\n 1 - Calculate the BMA coefficient values\n\n 2 - Compare these results to past the lifelines results\n \n 3 - Try running with different priors",
"_____no_output_____"
]
],
[
[
"\n##Solution to 1\nfrom pyBMA import CoxPHFitter\nbmaCox = pyBMA.CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.5]*4)",
"C:\\Users\\j.coltman\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\lifelines\\fitters\\coxph_fitter.py:285: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)\n df.sort(duration_col, inplace=True)\n"
],
[
"print(bmaCox.summary)",
" coef exp(coef)\nage -0.000122 0.999878\nbrand 0.000225 1.000225\nmale 0.917887 2.503993\nsearch -1.050679 0.349700\n"
],
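[
"## For problem 2, one way to compare: put the lifelines coefficients next to the BMA-averaged ones.\n## This assumes `cf` from the earlier CoxPHFitter fit is still in scope.\ncoef_comparison = pd.DataFrame({\"lifelines\": cf.summary[\"coef\"], \"pyBMA\": bmaCox.summary[\"coef\"]})\nprint(coef_comparison)",
"_____no_output_____"
],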
[
"#Low probability for everything favours parsimonious models\nbmaCox = pyBMA.CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.1]*4)\nprint(bmaCox.summary)",
"C:\\Users\\j.coltman\\AppData\\Local\\Continuum\\Anaconda3\\lib\\site-packages\\lifelines\\fitters\\coxph_fitter.py:285: FutureWarning: sort(columns=....) is deprecated, use sort_values(by=.....)\n df.sort(duration_col, inplace=True)\n"
],
[
"#Low probability for everything favours parsimonious models\nbmaCox = pyBMA.CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.9]*4)\nprint(bmaCox.summary)",
" coef exp(coef)\nage -0.000014 0.999986\nbrand 0.000026 1.000026\nmale 0.917881 2.503978\nsearch -1.050672 0.349703\n"
],
[
"#Low probability for everything favours parsimonious models\nbmaCox = pyBMA.CoxPHFitter.CoxPHFitter()\nbmaCox.fit(df, \"lifetime\", event_col= \"event\", priors= [0.3, 0.9, 0.001, 0.3])\nprint(bmaCox.summary)",
" coef exp(coef)\nage -0.000967 0.999033\nbrand 0.000085 1.000085\nmale 0.917849 2.503900\nsearch -1.050716 0.349687\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2ae46fb6a2ac82fd15e46f4566b75a53ea973 | 201,940 | ipynb | Jupyter Notebook | ch_05/HOMEWORK_ch5.ipynb | calwhi/molinExercises | 7ec4f55963b31c8ac3964ca5884a8ee7958d76e6 | [
"MIT"
] | null | null | null | ch_05/HOMEWORK_ch5.ipynb | calwhi/molinExercises | 7ec4f55963b31c8ac3964ca5884a8ee7958d76e6 | [
"MIT"
] | null | null | null | ch_05/HOMEWORK_ch5.ipynb | calwhi/molinExercises | 7ec4f55963b31c8ac3964ca5884a8ee7958d76e6 | [
"MIT"
] | null | null | null | 670.89701 | 88,100 | 0.949128 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfb = pd.read_csv('data/fb_stock_prices_2018.csv', index_col='date', parse_dates=True)\nquakes = pd.read_csv('data/earthquakes.csv')\ncovid = pd.read_csv('data/covid19_cases.csv').assign(\n date=lambda x: pd.to_datetime(x.dateRep, format='%d/%m/%Y')\n).set_index('date').replace(\n 'United_States_of_America', 'USA'\n).sort_index()['2020-01-18':'2020-09-18']\n\n",
"_____no_output_____"
],
[
"fb.close.rolling('20D').min().plot(\n title='Rolling 20 day Minimum Closing Price of Facebook Stock'\n)",
"_____no_output_____"
],
[
"differential = fb.open - fb.close\nax = differential.plot(kind='hist', density=True, alpha=0.3)\ndifferential.plot(\n kind='kde', color='blue', ax=ax, \n title='Facebook Stock Price\\'s Daily Change from Open to Close'\n)",
"_____no_output_____"
],
[
"quakes.query('parsed_place == \"Indonesia\"')[['mag', 'magType']]\\\n .groupby('magType').boxplot(layout=(1, 4), figsize=(15, 3))",
"_____no_output_____"
],
[
"fb.resample('1W').agg(\n dict(high='max', low='min')\n).assign(\n max_change_weekly=lambda x: x.high - x.low\n).max_change_weekly.plot(\n title='Difference between Weekly Maximum High Price\\n'\n 'and Weekly Minimum Low Price of Facebook Stock'\n)",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n\nnew_cases_rolling_average = covid.pivot_table(\n index=covid.index, columns=['countriesAndTerritories'], values='cases'\n).apply(lambda x: x.diff().rolling(14).mean())\n\nnew_cases_rolling_average[['China']].plot(ax=axes[0], color='red')\nnew_cases_rolling_average[['Italy', 'Spain']].plot(\n ax=axes[1], color=['magenta', 'cyan'],\n title='14-day rolling average of change in daily new COVID-19 cases\\n(source: ECDC)'\n)\nnew_cases_rolling_average[['Brazil', 'India', 'USA']].plot(ax=axes[2])",
"_____no_output_____"
],
[
"series = (fb.open - fb.close.shift())\nmonthly_effect = series.resample('1M').sum()\n\nfig, axes = plt.subplots(1, 2, figsize=(10, 3))\n\nseries.plot(\n ax=axes[0],\n title='After hours trading\\n(Open Price - Prior Day\\'s Close)'\n)\n\nmonthly_effect.index = monthly_effect.index.strftime('%b')\nmonthly_effect.plot(\n ax=axes[1],\n kind='bar', \n title='After hours trading monthly effect',\n color=np.where(monthly_effect >= 0, 'g', 'r'),\n rot=0\n)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2be7e863e802e26ae27b9bdd02ec275ec9b9a | 201,381 | ipynb | Jupyter Notebook | docs/notebooks/swath_scatter.ipynb | PyOSP-devs/PyOSP | f3137b9881b2f0f8a703d64f06c4776694c82f40 | [
"Apache-2.0"
] | 19 | 2020-10-31T19:03:17.000Z | 2022-03-30T06:58:35.000Z | docs/notebooks/swath_scatter.ipynb | PyOSP-devs/PyOSP | f3137b9881b2f0f8a703d64f06c4776694c82f40 | [
"Apache-2.0"
] | 1 | 2021-05-18T16:17:43.000Z | 2021-05-19T16:42:28.000Z | docs/notebooks/swath_scatter.ipynb | PyOSP-devs/PyOSP | f3137b9881b2f0f8a703d64f06c4776694c82f40 | [
"Apache-2.0"
] | 7 | 2020-10-12T16:16:18.000Z | 2021-10-29T21:09:33.000Z | 1,157.362069 | 37,726 | 0.791316 | [
[
[
"# Swath profile with scatter plot\n\nIt is sometimes desired to plot the geological features with the swath profile. For example, plot the glacial points coupling with the mountain range profile [(Dortch et al. 2011)](https://www.sciencedirect.com/science/article/pii/S0169555X11004120). PyOSP provides simple workflow to plot such figures.\n\n## Step 1. Save features as points in shapefile\n\nAs shown below, we randomly drew four points within the range of mountain. These are saved as *checking_points.shp*.\n\n<img alt=\"homo_case\" src=\"https://i.imgur.com/AM5fnnC.png\" height=\"250\"/>\n\n## Step 2. Generate the swath object\n\n",
"_____no_output_____"
]
],
[
[
"import pyosp\n\nbaseline = pyosp.datasets.get_path(\"homo_baseline.shp\") # the path to baseline shapefile\nraster = pyosp.datasets.get_path(\"homo_mount.tif\") # the path to raster file\n\nelev = pyosp.Elev_curv(baseline, raster, width=100,\n min_elev=0.01,\n line_stepsize=3, cross_stepsize=None)",
"Processing: [#########################] 71 of 71 lineSteps"
]
],
[
[
"## Step 3. Plot the scatter with the swath profile\n\nIf we pass the points path to the method *profile_plot*, these points' distance and elevation information will be processed and plot in the same figure of swath profile.\n",
"_____no_output_____"
]
],
[
[
"from pyosp import point_coords\n\npointsPath = pyosp.datasets.get_path(\"checking_points.shp\")\n\nelev.profile_plot(points=pointsPath)",
"_____no_output_____"
]
],
[
[
"We can make some changes to the figure to make it clear. ",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nfig, ax = plt.subplots()\n# Note parameter \"color\" means color of swath profile, \"c\" and additional\n# parameters are passed to the Matplolib scatter function.\nelev.profile_plot(ax=ax, points=pointsPath, color=\"navy\", c=\"red\", marker=\"s\")\n",
"_____no_output_____"
]
],
[
[
"Note that method *profile_plot* has addotional parameters *start* and *end*. By defining those, user can limit the distance range to be plotted.\n\n**User can offer simple values to these parameters. Also, parameters can be defined by points on baseline to indicate the starting and ending locations, as shown below.**\n\n<img alt=\"homo_case\" src=\"https://i.imgur.com/9QUKGDT.png\" height=\"250\"/>\n\n",
"_____no_output_____"
]
],
[
[
"from pyosp import point_coords\n\npointsPath = pyosp.datasets.get_path(\"homo_start_end.shp\") # the path to the points\n\npointsCoords = point_coords(pointsPath)\n\n# we drew the ending points first here\nelev.profile_plot(start=pointsCoords[1], end=pointsCoords[0])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c2c097e1fcfc1f5e0052c7b5108f1d3a10b33b | 73,659 | ipynb | Jupyter Notebook | NLP/real-or-not.ipynb | kartikay-99k/Notebooks-resources | 960da1e0ef8a541894bea01c80c97fed986c9208 | [
"MIT"
] | null | null | null | NLP/real-or-not.ipynb | kartikay-99k/Notebooks-resources | 960da1e0ef8a541894bea01c80c97fed986c9208 | [
"MIT"
] | null | null | null | NLP/real-or-not.ipynb | kartikay-99k/Notebooks-resources | 960da1e0ef8a541894bea01c80c97fed986c9208 | [
"MIT"
] | 1 | 2021-02-10T15:27:43.000Z | 2021-02-10T15:27:43.000Z | 72.785573 | 19,544 | 0.758536 | [
[
[
"import pandas as pd\nimport seaborn as sns\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"train=pd.read_csv('../input/nlp-getting-started/train.csv')",
"_____no_output_____"
],
[
"test=pd.read_csv('../input/nlp-getting-started/test.csv')",
"_____no_output_____"
],
[
"sample=pd.read_csv('../input/nlp-getting-started/sample_submission.csv')",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"train.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 7613 entries, 0 to 7612\nData columns (total 5 columns):\nid 7613 non-null int64\nkeyword 7552 non-null object\nlocation 5080 non-null object\ntext 7613 non-null object\ntarget 7613 non-null int64\ndtypes: int64(2), object(3)\nmemory usage: 297.5+ KB\n"
],
[
"train.describe()",
"_____no_output_____"
],
[
"train.shape",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"target = train['target']\nsns.countplot(target)\ntrain.drop(['target'], inplace =True,axis =1)",
"_____no_output_____"
],
[
"def concat_df(train, test):\n return pd.concat([train, test], sort=True).reset_index(drop=True)\n",
"_____no_output_____"
],
[
"df_all = concat_df(train, test)\nprint(train.shape)\nprint(test.shape)\nprint(df_all.shape)",
"(7613, 4)\n(3263, 4)\n(10876, 4)\n"
],
[
"df_all.head()",
"_____no_output_____"
],
[
"import tensorflow as tf\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences",
"_____no_output_____"
],
[
"sentences = train['text']\n\ntrain_size = int(7613*0.8)\ntrain_sentences = sentences[:train_size]\ntrain_labels = target[:train_size]\n\ntest_sentences = sentences[train_size:]\ntest_labels = target[train_size:]\n\n\nvocab_size = 10000\nembedding_dim = 16\nmax_length = 120\ntrunc_type='post'\noov_tok = \"<OOV>\"\n\n\ntokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)\ntokenizer.fit_on_texts(train_sentences)\nword_index = tokenizer.word_index\nsequences = tokenizer.texts_to_sequences(train_sentences)\npadded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)\n\ntesting_sequences = tokenizer.texts_to_sequences(test_sentences)\ntesting_padded = pad_sequences(testing_sequences,maxlen=max_length)",
"_____no_output_____"
],
[
"model = tf.keras.Sequential([\n tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n tf.keras.layers.GlobalAveragePooling1D(),\n tf.keras.layers.Dense(14, activation='relu'),\n tf.keras.layers.Dense(1, activation='sigmoid')\n])\n\n\nmodel.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding (Embedding) (None, 120, 16) 160000 \n_________________________________________________________________\nglobal_average_pooling1d (Gl (None, 16) 0 \n_________________________________________________________________\ndense (Dense) (None, 14) 238 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 15 \n=================================================================\nTotal params: 160,253\nTrainable params: 160,253\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"num_epochs = 10",
"_____no_output_____"
],
[
"train_labels = np.asarray(train_labels)\ntest_labels = np.asarray(test_labels)",
"_____no_output_____"
],
[
"history = model.fit(padded, train_labels, epochs=num_epochs, validation_data=(testing_padded, test_labels))",
"Train on 6090 samples, validate on 1523 samples\nEpoch 1/10\n6090/6090 [==============================] - 2s 279us/sample - loss: 0.6805 - accuracy: 0.5737 - val_loss: 0.6908 - val_accuracy: 0.5345\nEpoch 2/10\n6090/6090 [==============================] - 1s 126us/sample - loss: 0.6672 - accuracy: 0.5793 - val_loss: 0.6693 - val_accuracy: 0.5338\nEpoch 3/10\n6090/6090 [==============================] - 1s 126us/sample - loss: 0.6248 - accuracy: 0.6635 - val_loss: 0.6149 - val_accuracy: 0.6888\nEpoch 4/10\n6090/6090 [==============================] - 1s 129us/sample - loss: 0.5452 - accuracy: 0.7719 - val_loss: 0.5599 - val_accuracy: 0.7420\nEpoch 5/10\n6090/6090 [==============================] - 1s 132us/sample - loss: 0.4700 - accuracy: 0.8056 - val_loss: 0.5093 - val_accuracy: 0.7708\nEpoch 6/10\n6090/6090 [==============================] - 1s 129us/sample - loss: 0.4121 - accuracy: 0.8371 - val_loss: 0.4833 - val_accuracy: 0.7827\nEpoch 7/10\n6090/6090 [==============================] - 1s 132us/sample - loss: 0.3701 - accuracy: 0.8534 - val_loss: 0.4674 - val_accuracy: 0.7919\nEpoch 8/10\n6090/6090 [==============================] - 1s 134us/sample - loss: 0.3387 - accuracy: 0.8685 - val_loss: 0.4573 - val_accuracy: 0.7984\nEpoch 9/10\n6090/6090 [==============================] - 1s 134us/sample - loss: 0.3115 - accuracy: 0.8798 - val_loss: 0.4551 - val_accuracy: 0.8004\nEpoch 10/10\n6090/6090 [==============================] - 1s 140us/sample - loss: 0.2896 - accuracy: 0.8931 - val_loss: 0.4447 - val_accuracy: 0.8070\n"
],
[
"def plot(history,string):\n plt.plot(history.history[string])\n plt.plot(history.history['val_'+string])\n plt.xlabel(\"Epochs\")\n plt.ylabel(string)\n plt.legend([string, 'val_'+string])\n plt.show()\n",
"_____no_output_____"
],
[
"plot(history, \"accuracy\") ",
"_____no_output_____"
],
[
"plot(history, 'loss')",
"_____no_output_____"
],
[
"tokenizer_1 = Tokenizer(num_words = vocab_size, oov_token=oov_tok)\ntokenizer_1.fit_on_texts(train['text'])\n\nword_index = tokenizer_1.word_index\nsequences = tokenizer_1.texts_to_sequences(train['text'])\npadded = pad_sequences(sequences,maxlen=max_length, truncating=trunc_type)\n\ntrue_test_sentences = test['text']\ntesting_sequences = tokenizer_1.texts_to_sequences(true_test_sentences)\ntesting_padded = pad_sequences(testing_sequences,maxlen=max_length)",
"_____no_output_____"
],
[
"model_2 = tf.keras.Sequential([\n tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),\n tf.keras.layers.GlobalAveragePooling1D(),\n tf.keras.layers.Dense(24, activation='relu'),\n tf.keras.layers.Dense(1, activation='sigmoid')\n])\n\nmodel_2.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])\n",
"_____no_output_____"
],
[
"model_2.summary()",
"Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nembedding_1 (Embedding) (None, 120, 16) 160000 \n_________________________________________________________________\nglobal_average_pooling1d_1 ( (None, 16) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 24) 408 \n_________________________________________________________________\ndense_3 (Dense) (None, 1) 25 \n=================================================================\nTotal params: 160,433\nTrainable params: 160,433\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"target = np.asarray(target)",
"_____no_output_____"
],
[
"num_epochs = 20\nhistory = model_2.fit(padded, target, epochs=num_epochs, verbose=2)",
"Train on 7613 samples\nEpoch 1/20\n7613/7613 - 2s - loss: 0.6803 - accuracy: 0.5703\nEpoch 2/20\n7613/7613 - 1s - loss: 0.6411 - accuracy: 0.6339\nEpoch 3/20\n7613/7613 - 1s - loss: 0.5324 - accuracy: 0.7716\nEpoch 4/20\n7613/7613 - 1s - loss: 0.4425 - accuracy: 0.8145\nEpoch 5/20\n7613/7613 - 1s - loss: 0.3844 - accuracy: 0.8458\nEpoch 6/20\n7613/7613 - 1s - loss: 0.3429 - accuracy: 0.8630\nEpoch 7/20\n7613/7613 - 1s - loss: 0.3125 - accuracy: 0.8738\nEpoch 8/20\n7613/7613 - 1s - loss: 0.2867 - accuracy: 0.8894\nEpoch 9/20\n7613/7613 - 1s - loss: 0.2640 - accuracy: 0.8982\nEpoch 10/20\n7613/7613 - 1s - loss: 0.2452 - accuracy: 0.9090\nEpoch 11/20\n7613/7613 - 1s - loss: 0.2272 - accuracy: 0.9141\nEpoch 12/20\n7613/7613 - 1s - loss: 0.2155 - accuracy: 0.9195\nEpoch 13/20\n7613/7613 - 1s - loss: 0.2018 - accuracy: 0.9245\nEpoch 14/20\n7613/7613 - 1s - loss: 0.1879 - accuracy: 0.9317\nEpoch 15/20\n7613/7613 - 1s - loss: 0.1760 - accuracy: 0.9351\nEpoch 16/20\n7613/7613 - 1s - loss: 0.1631 - accuracy: 0.9405\nEpoch 17/20\n7613/7613 - 1s - loss: 0.1511 - accuracy: 0.9464\nEpoch 18/20\n7613/7613 - 1s - loss: 0.1434 - accuracy: 0.9501\nEpoch 19/20\n7613/7613 - 1s - loss: 0.1356 - accuracy: 0.9505\nEpoch 20/20\n7613/7613 - 1s - loss: 0.1271 - accuracy: 0.9548\n"
],
[
"output = model_2.predict(testing_padded)",
"_____no_output_____"
],
[
"predicted = pd.DataFrame(output, columns=['target'])",
"_____no_output_____"
],
[
"final_output = []\nfor val in predicted.target:\n if val > 0.5:\n final_output.append(1)\n else:\n final_output.append(0)\n",
"_____no_output_____"
],
[
"sample['target'] = final_output\n\nsample.to_csv(\"submission_1.csv\", index=False)\nsample.head()",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2cc84fbd206fc7bf4872a9c642db6bb07b411 | 257,981 | ipynb | Jupyter Notebook | jupyter/.ipynb_checkpoints/test-violin-distribution-checkpoint.ipynb | jaimevalero/COVID-19 | ee5706b8a2d1bf7344b20771fa5deca1c9518379 | [
"CC-BY-4.0"
] | 9 | 2020-04-06T06:26:26.000Z | 2021-11-29T17:30:55.000Z | jupyter/.ipynb_checkpoints/test-violin-distribution-checkpoint.ipynb | jaimevalero/COVID-19 | ee5706b8a2d1bf7344b20771fa5deca1c9518379 | [
"CC-BY-4.0"
] | 3 | 2020-04-02T18:17:04.000Z | 2020-04-10T09:46:45.000Z | jupyter/.ipynb_checkpoints/test-violin-distribution-checkpoint.ipynb | jaimevalero/COVID-19 | ee5706b8a2d1bf7344b20771fa5deca1c9518379 | [
"CC-BY-4.0"
] | 2 | 2020-04-02T18:09:11.000Z | 2020-04-09T10:56:37.000Z | 152.291027 | 103,768 | 0.803303 | [
[
[
"import pandas as pd\n\n# Cargamos datos\nimport Loading_data \nfrom matplotlib import pyplot as plt \nimport warnings\nwarnings.filterwarnings('ignore')\nfrom IPython.display import display, HTML\n\nfrom Loading_data import Get_Comunidades_List as comunidades\nCOMUNIDADES = comunidades()\n",
"_____no_output_____"
],
[
"import numpy as np \nimport seaborn as sns\n\ndef Get_Single_Dimension(dimension ):\n df = pd.DataFrame()\n df_tmp = pd.DataFrame()\n array = []\n #\n for ca in COMUNIDADES:\n df_tmp = Loading_data.Get_Comunidad(ca)\n new = df_tmp[[dimension]].copy()\n new.rename(columns={dimension: ca}, inplace=True)\n array.append(new)\n #\n df = pd.concat(array, axis=1)\n return df\n\ndef plot_violin(dimension):\n \"\"\" Muestra la distribucion logaritmica por comunidades, de una dimension\"\"\"\n df = Get_Single_Dimension(dimension) \n # Ordenamos comunidades \n s = df.sum()\n df = df[s.sort_values(ascending=False).index[:]]\n\n # Pasamos a logaritmo\n df2 = np.log(df)\n df2.replace(-np.inf, np.nan, inplace=True)\n display(HTML(\"<h2>Comparativa de distribucion de '\" +dimension+ \"', en cada CC.AA </h2>\"))\n display(HTML(\"Distribuciones convertidas a logaritmos neperianos, para facilitar la comparación.\"))\n\n f, ax = plt.subplots()\n f.set_size_inches( 16, 10)\n\n f.suptitle(\"Comunidades con más, \" + dimension.lower())\n\n sns.violinplot(data=df2.iloc[:,:-7])\n\n f, ax = plt.subplots()\n f.set_size_inches( 16, 10)\n f.suptitle(\"Comunidades con menos, \" + dimension.lower()+\".\")\n sns.violinplot(data=df2.iloc[:,7:])\n return df\n\ndimension = 'Fallecidos hoy absoluto'\ndf = plot_violin(dimension)\ndf\n",
"_____no_output_____"
],
[
"def Debug_Get_Single_Dimension():\n dimension = 'Fallecidos hoy absoluto'\n df = Get_Single_Dimension(dimension)\n return df\n# Debug_Get_Single_Dimension()",
"_____no_output_____"
],
[
"display(HTML(\"<h2>Comparativa de distribucion de \" +dimension+ \" en cada CC.AA </h2>\"))\ndisplay(HTML(\"Distribuciones convertidas a logaritmos neperianos, para facilitar la comparación\"))\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0c2dfc1a93087ed68e32676407f99d4995d349d | 6,881 | ipynb | Jupyter Notebook | parse_dataset_labels/label_data_indian_set.ipynb | e-remington-lee/record_video | e33d9ab91e1194da3c94286344c309debd57dc02 | [
"Apache-2.0"
] | 2 | 2021-08-24T03:26:44.000Z | 2022-02-07T12:52:43.000Z | parse_dataset_labels/label_data_indian_set.ipynb | e-remington-lee/record_video | e33d9ab91e1194da3c94286344c309debd57dc02 | [
"Apache-2.0"
] | null | null | null | parse_dataset_labels/label_data_indian_set.ipynb | e-remington-lee/record_video | e33d9ab91e1194da3c94286344c309debd57dc02 | [
"Apache-2.0"
] | null | null | null | 33.241546 | 113 | 0.47653 | [
[
[
"import os\nimport random\nimport shutil\nfrom shutil import copyfile\nimport csv",
"_____no_output_____"
],
[
"root_dir = \"ISAFE MAIN DATABASE FOR PUBLIC/\"\ndata = \"Database/\"\nglobal_emotion_dir = \"emotions_5/\"\n# global_emotion_dir = \"emotions/\"",
"_____no_output_____"
],
[
"subject_list = os.path.join(root_dir, data)\nx = os.listdir(subject_list)\ncsv_file = \"ISAFE MAIN DATABASE FOR PUBLIC\\Annotations\\self-annotation.csv\"\n",
"_____no_output_____"
],
[
"labels_dictionary = {}\nwith open(csv_file) as rf:\n rows = csv.reader(rf, delimiter=\",\")\n for row in rows:\n labels_dictionary[row[0]]=row[1]",
"_____no_output_____"
],
[
"def parse_labels(directory, cut_images):\n li = os.listdir(directory)\n string_directory = str(directory)\n label_key = string_directory[-6:]\n if not \"S\" in label_key:\n label_key = \"S\"+label_key\n for item in li: \n path = os.path.join(directory,item)\n if os.path.isdir(path):\n parse_labels(path, cut_images)\n elif item.endswith(\".jpg\"):\n if cut_images:\n if (item.endswith(\"_0.jpg\") or item.endswith(\"_1.jpg\") or\n item.endswith(\"_2.jpg\") or\n item.endswith(\"_3.jpg\") or\n item.endswith(\"_4.jpg\") or\n item.endswith(\"_5.jpg\") or\n item.endswith(\"_6.jpg\") or\n item.endswith(\"_7.jpg\") or\n item.endswith(\"_8.jpg\") or\n item.endswith(\"_9.jpg\") or\n item.endswith(\"_10.jpg\") or\n item.endswith(\"_11.jpg\") or\n item.endswith(\"_12.jpg\") or\n item.endswith(\"_13.jpg\") or\n item.endswith(\"_14.jpg\")):\n continue\n randint = random.random()\n \n whydoineedtodothisshit = label_key.replace(\"\\\\\", \"/\")\n emotion = labels_dictionary[whydoineedtodothisshit]\n identifier = label_key.replace(\"\\\\\", \"_\")\n pic_id = identifier+item\n\n # randomizes the images in real time\n if randint < 0.8:\n train_test_validate = \"train\"\n elif randint >= 0.8 and randint < 0.9:\n train_test_validate = \"validation\"\n else: \n train_test_validate = \"test\" \n\n if emotion == \"1\":\n emotion_ = \"joy\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n elif emotion == \"2\":\n emotion_ = \"sadness\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n elif emotion == \"3\":\n # 3 = surprise\n emotion_ = \"surprise_fear\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n elif emotion == \"4\":\n # 4 = disgust\n emotion_ = \"anger_disgust\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n elif emotion == \"5\":\n # 5=fear\n emotion_ = \"surprise_fear\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n elif emotion == \"6\":\n #6=anger\n emotion_ = \"anger_disgust\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n # elif emotion == \"7\":\n # unceratin, I do not have a classification for this\n # emotion_ = \"joy\"\n # copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n elif emotion == \"8\":\n emotion_ = \"neutral\"\n copy_files(item, pic_id, directory, emotion_, global_emotion_dir, train_test_validate)\n else:\n continue\n ",
"_____no_output_____"
],
[
"def copy_files(pic, pic_id, orignal_dir, emotion_, global_emotion_dir, ttv):\n file_ = os.path.join(orignal_dir, pic)\n ttv_dir = os.path.join(global_emotion_dir, ttv)\n emotion_dir = os.path.join(ttv_dir, emotion_)\n dest_ = os.path.join(emotion_dir, pic_id)\n if os.path.getsize(file_) != 0:\n copyfile(file_, dest_)\n ",
"_____no_output_____"
],
[
"for root, dirs, files in os.walk(\"emotions_copy_test_dir\"):\n for x in files:\n os.remove(os.path.join(root, x))",
"_____no_output_____"
],
[
"parse_labels(subject_list, True)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c2ef07dfc5dd2f245cb9e114621d9d0a37590c | 703,855 | ipynb | Jupyter Notebook | How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2.ipynb | aljeshishe/FrameworkBenchmarks | 2ad15cc1b9b00ebbf4d4f60849165b3184e45a2b | [
"BSD-3-Clause"
] | null | null | null | How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2.ipynb | aljeshishe/FrameworkBenchmarks | 2ad15cc1b9b00ebbf4d4f60849165b3184e45a2b | [
"BSD-3-Clause"
] | null | null | null | How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2.ipynb | aljeshishe/FrameworkBenchmarks | 2ad15cc1b9b00ebbf4d4f60849165b3184e45a2b | [
"BSD-3-Clause"
] | null | null | null | 77.808424 | 10,344 | 0.548197 | [
[
[
"<a href=\"https://colab.research.google.com/github/aljeshishe/FrameworkBenchmarks/blob/master/How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"pip install kaggle -q",
"_____no_output_____"
],
[
"import json\ntoken = {'username':'aljeshishe','key':'32deca82aa1c29fbaeadcce2bf470af4'}\nwith open('kaggle.json', 'w') as file:\n json.dump(token, file)",
"_____no_output_____"
],
[
"!mkdir ~/.kaggle\n!mv kaggle.json ~/.kaggle/kaggle.json\n!chmod 600 ~/.kaggle/kaggle.json",
"_____no_output_____"
],
[
"!kaggle competitions download -c dogs-vs-cats",
"Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.6 / client 1.5.4)\nDownloading sampleSubmission.csv to /content\n\r 0% 0.00/86.8k [00:00<?, ?B/s]\n100% 86.8k/86.8k [00:00<00:00, 32.6MB/s]\nDownloading train.zip to /content\n 96% 521M/543M [00:06<00:00, 77.6MB/s]\n100% 543M/543M [00:06<00:00, 82.7MB/s]\nDownloading test1.zip to /content\n 95% 258M/271M [00:03<00:00, 59.3MB/s]\n100% 271M/271M [00:03<00:00, 72.9MB/s]\n"
],
[
"!unzip test1.zip\n!unzip train.zip",
"\u001b[1;30;43mStreaming output truncated to the last 5000 lines.\u001b[0m\n inflating: train/dog.55.jpg \n inflating: train/dog.550.jpg \n inflating: train/dog.5500.jpg \n inflating: train/dog.5501.jpg \n inflating: train/dog.5502.jpg \n inflating: train/dog.5503.jpg \n inflating: train/dog.5504.jpg \n inflating: train/dog.5505.jpg \n inflating: train/dog.5506.jpg \n inflating: train/dog.5507.jpg \n inflating: train/dog.5508.jpg \n inflating: train/dog.5509.jpg \n inflating: train/dog.551.jpg \n inflating: train/dog.5510.jpg \n inflating: train/dog.5511.jpg \n inflating: train/dog.5512.jpg \n inflating: train/dog.5513.jpg \n inflating: train/dog.5514.jpg \n inflating: train/dog.5515.jpg \n inflating: train/dog.5516.jpg \n inflating: train/dog.5517.jpg \n inflating: train/dog.5518.jpg \n inflating: train/dog.5519.jpg \n inflating: train/dog.552.jpg \n inflating: train/dog.5520.jpg \n inflating: train/dog.5521.jpg \n inflating: train/dog.5522.jpg \n inflating: train/dog.5523.jpg \n inflating: train/dog.5524.jpg \n inflating: train/dog.5525.jpg \n inflating: train/dog.5526.jpg \n inflating: train/dog.5527.jpg \n inflating: train/dog.5528.jpg \n inflating: train/dog.5529.jpg \n inflating: train/dog.553.jpg \n inflating: train/dog.5530.jpg \n inflating: train/dog.5531.jpg \n inflating: train/dog.5532.jpg \n inflating: train/dog.5533.jpg \n inflating: train/dog.5534.jpg \n inflating: train/dog.5535.jpg \n inflating: train/dog.5536.jpg \n inflating: train/dog.5537.jpg \n inflating: train/dog.5538.jpg \n inflating: train/dog.5539.jpg \n inflating: train/dog.554.jpg \n inflating: train/dog.5540.jpg \n inflating: train/dog.5541.jpg \n inflating: train/dog.5542.jpg \n inflating: train/dog.5543.jpg \n inflating: train/dog.5544.jpg \n inflating: train/dog.5545.jpg \n inflating: train/dog.5546.jpg \n inflating: train/dog.5547.jpg \n inflating: train/dog.5548.jpg \n inflating: train/dog.5549.jpg \n inflating: train/dog.555.jpg \n inflating: train/dog.5550.jpg \n inflating: train/dog.5551.jpg \n inflating: train/dog.5552.jpg \n inflating: train/dog.5553.jpg \n inflating: train/dog.5554.jpg \n inflating: train/dog.5555.jpg \n inflating: train/dog.5556.jpg \n inflating: train/dog.5557.jpg \n inflating: train/dog.5558.jpg \n inflating: train/dog.5559.jpg \n inflating: train/dog.556.jpg \n inflating: train/dog.5560.jpg \n inflating: train/dog.5561.jpg \n inflating: train/dog.5562.jpg \n inflating: train/dog.5563.jpg \n inflating: train/dog.5564.jpg \n inflating: train/dog.5565.jpg \n inflating: train/dog.5566.jpg \n inflating: train/dog.5567.jpg \n inflating: train/dog.5568.jpg \n inflating: train/dog.5569.jpg \n inflating: train/dog.557.jpg \n inflating: train/dog.5570.jpg \n inflating: train/dog.5571.jpg \n inflating: train/dog.5572.jpg \n inflating: train/dog.5573.jpg \n inflating: train/dog.5574.jpg \n inflating: train/dog.5575.jpg \n inflating: train/dog.5576.jpg \n inflating: train/dog.5577.jpg \n inflating: train/dog.5578.jpg \n inflating: train/dog.5579.jpg \n inflating: train/dog.558.jpg \n inflating: train/dog.5580.jpg \n inflating: train/dog.5581.jpg \n inflating: train/dog.5582.jpg \n inflating: train/dog.5583.jpg \n inflating: train/dog.5584.jpg \n inflating: train/dog.5585.jpg \n inflating: train/dog.5586.jpg \n inflating: train/dog.5587.jpg \n inflating: train/dog.5588.jpg \n inflating: train/dog.5589.jpg \n inflating: train/dog.559.jpg \n inflating: train/dog.5590.jpg \n inflating: train/dog.5591.jpg \n inflating: train/dog.5592.jpg \n inflating: train/dog.5593.jpg \n inflating: 
train/dog.6469.jpg \n inflating: train/dog.647.jpg \n inflating: train/dog.6470.jpg \n inflating: train/dog.6471.jpg \n inflating: train/dog.6472.jpg \n inflating: train/dog.6473.jpg \n inflating: train/dog.6474.jpg \n inflating: train/dog.6475.jpg \n inflating: train/dog.6476.jpg \n inflating: train/dog.6477.jpg \n inflating: train/dog.6478.jpg \n inflating: train/dog.6479.jpg \n inflating: train/dog.648.jpg \n inflating: train/dog.6480.jpg \n inflating: train/dog.6481.jpg \n inflating: train/dog.6482.jpg \n inflating: train/dog.6483.jpg \n inflating: train/dog.6484.jpg \n inflating: train/dog.6485.jpg \n inflating: train/dog.6486.jpg \n inflating: train/dog.6487.jpg \n inflating: train/dog.6488.jpg \n inflating: train/dog.6489.jpg \n inflating: train/dog.649.jpg \n inflating: train/dog.6490.jpg \n inflating: train/dog.6491.jpg \n inflating: train/dog.6492.jpg \n inflating: train/dog.6493.jpg \n inflating: train/dog.6494.jpg \n inflating: train/dog.6495.jpg \n inflating: train/dog.6496.jpg \n inflating: train/dog.6497.jpg \n inflating: train/dog.6498.jpg \n inflating: train/dog.6499.jpg \n inflating: train/dog.65.jpg \n inflating: train/dog.650.jpg \n inflating: train/dog.6500.jpg \n inflating: train/dog.6501.jpg \n inflating: train/dog.6502.jpg \n inflating: train/dog.6503.jpg \n inflating: train/dog.6504.jpg \n inflating: train/dog.6505.jpg \n inflating: train/dog.6506.jpg \n inflating: train/dog.6507.jpg \n inflating: train/dog.6508.jpg \n inflating: train/dog.6509.jpg \n inflating: train/dog.651.jpg \n inflating: train/dog.6510.jpg \n inflating: train/dog.6511.jpg \n inflating: train/dog.6512.jpg \n inflating: train/dog.6513.jpg \n inflating: train/dog.6514.jpg \n inflating: train/dog.6515.jpg \n inflating: train/dog.6516.jpg \n inflating: train/dog.6517.jpg \n inflating: train/dog.6518.jpg \n inflating: train/dog.6519.jpg \n inflating: train/dog.652.jpg \n inflating: train/dog.6520.jpg \n inflating: train/dog.6521.jpg \n inflating: train/dog.6522.jpg \n inflating: train/dog.6523.jpg \n inflating: train/dog.6524.jpg \n inflating: train/dog.6525.jpg \n inflating: train/dog.6526.jpg \n inflating: train/dog.6527.jpg \n inflating: train/dog.6528.jpg \n inflating: train/dog.6529.jpg \n inflating: train/dog.653.jpg \n inflating: train/dog.6530.jpg \n inflating: train/dog.6531.jpg \n inflating: train/dog.6532.jpg \n inflating: train/dog.6533.jpg \n inflating: train/dog.6534.jpg \n inflating: train/dog.6535.jpg \n inflating: train/dog.6536.jpg \n inflating: train/dog.6537.jpg \n inflating: train/dog.6538.jpg \n inflating: train/dog.6539.jpg \n inflating: train/dog.654.jpg \n inflating: train/dog.6540.jpg \n inflating: train/dog.6541.jpg \n inflating: train/dog.6542.jpg \n inflating: train/dog.6543.jpg \n inflating: train/dog.6544.jpg \n inflating: train/dog.6545.jpg \n inflating: train/dog.6546.jpg \n inflating: train/dog.6547.jpg \n inflating: train/dog.6548.jpg \n inflating: train/dog.6549.jpg \n inflating: train/dog.655.jpg \n inflating: train/dog.6550.jpg \n inflating: train/dog.6551.jpg \n inflating: train/dog.6552.jpg \n inflating: train/dog.6553.jpg \n inflating: train/dog.6554.jpg \n inflating: train/dog.6555.jpg \n inflating: train/dog.6556.jpg \n inflating: train/dog.6557.jpg \n inflating: train/dog.6558.jpg \n inflating: train/dog.6559.jpg \n inflating: train/dog.656.jpg \n inflating: train/dog.6560.jpg \n inflating: train/dog.6561.jpg \n inflating: train/dog.6562.jpg \n inflating: train/dog.6563.jpg \n inflating: train/dog.6564.jpg \n inflating: train/dog.6565.jpg \n inflating: 
train/dog.6566.jpg \n inflating: train/dog.6567.jpg \n inflating: train/dog.6568.jpg \n inflating: train/dog.6569.jpg \n inflating: train/dog.657.jpg \n inflating: train/dog.6570.jpg \n inflating: train/dog.6571.jpg \n inflating: train/dog.6572.jpg \n inflating: train/dog.6573.jpg \n inflating: train/dog.6574.jpg \n inflating: train/dog.6575.jpg \n inflating: train/dog.6576.jpg \n inflating: train/dog.6577.jpg \n inflating: train/dog.6578.jpg \n inflating: train/dog.6579.jpg \n inflating: train/dog.658.jpg \n inflating: train/dog.6580.jpg \n inflating: train/dog.6581.jpg \n inflating: train/dog.6582.jpg \n inflating: train/dog.6583.jpg \n inflating: train/dog.6584.jpg \n inflating: train/dog.6585.jpg \n inflating: train/dog.6586.jpg \n inflating: train/dog.6587.jpg \n inflating: train/dog.6588.jpg \n inflating: train/dog.6589.jpg \n inflating: train/dog.659.jpg \n inflating: train/dog.6590.jpg \n inflating: train/dog.6591.jpg \n inflating: train/dog.6592.jpg \n inflating: train/dog.6593.jpg \n inflating: train/dog.6594.jpg \n inflating: train/dog.6595.jpg \n inflating: train/dog.6596.jpg \n inflating: train/dog.6597.jpg \n inflating: train/dog.6598.jpg \n inflating: train/dog.6599.jpg \n inflating: train/dog.66.jpg \n inflating: train/dog.660.jpg \n inflating: train/dog.6600.jpg \n inflating: train/dog.6601.jpg \n inflating: train/dog.6602.jpg \n inflating: train/dog.6603.jpg \n inflating: train/dog.6604.jpg \n inflating: train/dog.6605.jpg \n inflating: train/dog.6606.jpg \n inflating: train/dog.6607.jpg \n inflating: train/dog.6608.jpg \n inflating: train/dog.6609.jpg \n inflating: train/dog.661.jpg \n inflating: train/dog.6610.jpg \n inflating: train/dog.6611.jpg \n inflating: train/dog.6612.jpg \n inflating: train/dog.6613.jpg \n inflating: train/dog.6614.jpg \n inflating: train/dog.6615.jpg \n inflating: train/dog.6616.jpg \n inflating: train/dog.6617.jpg \n inflating: train/dog.6618.jpg \n inflating: train/dog.6619.jpg \n inflating: train/dog.662.jpg \n inflating: train/dog.6620.jpg \n inflating: train/dog.6621.jpg \n inflating: train/dog.6622.jpg \n inflating: train/dog.6623.jpg \n inflating: train/dog.6624.jpg \n inflating: train/dog.6625.jpg \n inflating: train/dog.6626.jpg \n inflating: train/dog.6627.jpg \n inflating: train/dog.6628.jpg \n inflating: train/dog.6629.jpg \n inflating: train/dog.663.jpg \n inflating: train/dog.6630.jpg \n inflating: train/dog.6631.jpg \n inflating: train/dog.6632.jpg \n inflating: train/dog.6633.jpg \n inflating: train/dog.6634.jpg \n inflating: train/dog.6635.jpg \n inflating: train/dog.6636.jpg \n inflating: train/dog.6637.jpg \n inflating: train/dog.6638.jpg \n inflating: train/dog.6639.jpg \n inflating: train/dog.664.jpg \n inflating: train/dog.6640.jpg \n inflating: train/dog.6641.jpg \n inflating: train/dog.6642.jpg \n inflating: train/dog.6643.jpg \n inflating: train/dog.6644.jpg \n inflating: train/dog.6645.jpg \n inflating: train/dog.6646.jpg \n inflating: train/dog.6647.jpg \n inflating: train/dog.6648.jpg \n inflating: train/dog.6649.jpg \n inflating: train/dog.665.jpg \n inflating: train/dog.6650.jpg \n inflating: train/dog.6651.jpg \n inflating: train/dog.6652.jpg \n inflating: train/dog.6653.jpg \n inflating: train/dog.6654.jpg \n inflating: train/dog.6655.jpg \n inflating: train/dog.6656.jpg \n inflating: train/dog.6657.jpg \n inflating: train/dog.6658.jpg \n inflating: train/dog.6659.jpg \n inflating: train/dog.666.jpg \n inflating: train/dog.6660.jpg \n inflating: train/dog.6661.jpg \n inflating: train/dog.6662.jpg \n inflating: 
train/dog.6663.jpg \n inflating: train/dog.6664.jpg \n inflating: train/dog.6665.jpg \n inflating: train/dog.6666.jpg \n inflating: train/dog.6667.jpg \n inflating: train/dog.6668.jpg \n inflating: train/dog.6669.jpg \n inflating: train/dog.667.jpg \n inflating: train/dog.6670.jpg \n inflating: train/dog.6671.jpg \n inflating: train/dog.6672.jpg \n inflating: train/dog.6673.jpg \n inflating: train/dog.6674.jpg \n inflating: train/dog.6675.jpg \n inflating: train/dog.6676.jpg \n inflating: train/dog.6677.jpg \n inflating: train/dog.6678.jpg \n inflating: train/dog.6679.jpg \n inflating: train/dog.668.jpg \n inflating: train/dog.6680.jpg \n inflating: train/dog.6681.jpg \n inflating: train/dog.6682.jpg \n inflating: train/dog.6683.jpg \n inflating: train/dog.6684.jpg \n inflating: train/dog.6685.jpg \n inflating: train/dog.6686.jpg \n inflating: train/dog.6687.jpg \n inflating: train/dog.6688.jpg \n inflating: train/dog.6689.jpg \n inflating: train/dog.669.jpg \n inflating: train/dog.6690.jpg \n inflating: train/dog.6691.jpg \n inflating: train/dog.6692.jpg \n inflating: train/dog.6693.jpg \n inflating: train/dog.6694.jpg \n inflating: train/dog.6695.jpg \n inflating: train/dog.6696.jpg \n inflating: train/dog.6697.jpg \n inflating: train/dog.6698.jpg \n inflating: train/dog.6699.jpg \n inflating: train/dog.67.jpg \n inflating: train/dog.670.jpg \n inflating: train/dog.6700.jpg \n inflating: train/dog.6701.jpg \n inflating: train/dog.6702.jpg \n inflating: train/dog.6703.jpg \n inflating: train/dog.6704.jpg \n inflating: train/dog.6705.jpg \n inflating: train/dog.6706.jpg \n inflating: train/dog.6707.jpg \n inflating: train/dog.6708.jpg \n inflating: train/dog.6709.jpg \n inflating: train/dog.671.jpg \n inflating: train/dog.6710.jpg \n inflating: train/dog.6711.jpg \n inflating: train/dog.6712.jpg \n inflating: train/dog.6713.jpg \n inflating: train/dog.6714.jpg \n inflating: train/dog.6715.jpg \n inflating: train/dog.6716.jpg \n inflating: train/dog.6717.jpg \n inflating: train/dog.6718.jpg \n inflating: train/dog.6719.jpg \n inflating: train/dog.672.jpg \n inflating: train/dog.6720.jpg \n inflating: train/dog.6721.jpg \n inflating: train/dog.6722.jpg \n inflating: train/dog.6723.jpg \n inflating: train/dog.6724.jpg \n inflating: train/dog.6725.jpg \n inflating: train/dog.6726.jpg \n inflating: train/dog.6727.jpg \n inflating: train/dog.6728.jpg \n inflating: train/dog.6729.jpg \n inflating: train/dog.673.jpg \n inflating: train/dog.6730.jpg \n inflating: train/dog.6731.jpg \n inflating: train/dog.6732.jpg \n inflating: train/dog.6733.jpg \n inflating: train/dog.6734.jpg \n inflating: train/dog.6735.jpg \n inflating: train/dog.6736.jpg \n inflating: train/dog.6737.jpg \n inflating: train/dog.6738.jpg \n inflating: train/dog.6739.jpg \n inflating: train/dog.674.jpg \n inflating: train/dog.6740.jpg \n inflating: train/dog.6741.jpg \n inflating: train/dog.6742.jpg \n inflating: train/dog.6743.jpg \n inflating: train/dog.6744.jpg \n inflating: train/dog.6745.jpg \n inflating: train/dog.6746.jpg \n inflating: train/dog.6747.jpg \n inflating: train/dog.6748.jpg \n inflating: train/dog.6749.jpg \n inflating: train/dog.675.jpg \n inflating: train/dog.6750.jpg \n inflating: train/dog.6751.jpg \n inflating: train/dog.6752.jpg \n inflating: train/dog.6753.jpg \n inflating: train/dog.6754.jpg \n inflating: train/dog.6755.jpg \n inflating: train/dog.6756.jpg \n inflating: train/dog.6757.jpg \n inflating: train/dog.6758.jpg \n inflating: train/dog.6759.jpg \n inflating: train/dog.676.jpg \n inflating: 
train/dog.6760.jpg \n inflating: train/dog.6761.jpg \n inflating: train/dog.6762.jpg \n inflating: train/dog.6763.jpg \n inflating: train/dog.6764.jpg \n inflating: train/dog.6765.jpg \n inflating: train/dog.6766.jpg \n inflating: train/dog.6767.jpg \n inflating: train/dog.6768.jpg \n inflating: train/dog.6769.jpg \n inflating: train/dog.677.jpg \n inflating: train/dog.6770.jpg \n inflating: train/dog.6771.jpg \n inflating: train/dog.6772.jpg \n inflating: train/dog.6773.jpg \n inflating: train/dog.6774.jpg \n inflating: train/dog.6775.jpg \n inflating: train/dog.6776.jpg \n inflating: train/dog.6777.jpg \n inflating: train/dog.6778.jpg \n inflating: train/dog.6779.jpg \n inflating: train/dog.678.jpg \n inflating: train/dog.6780.jpg \n inflating: train/dog.6781.jpg \n inflating: train/dog.6782.jpg \n inflating: train/dog.6783.jpg \n inflating: train/dog.6784.jpg \n inflating: train/dog.6785.jpg \n inflating: train/dog.6786.jpg \n inflating: train/dog.6787.jpg \n inflating: train/dog.6788.jpg \n inflating: train/dog.6789.jpg \n inflating: train/dog.679.jpg \n inflating: train/dog.6790.jpg \n inflating: train/dog.6791.jpg \n inflating: train/dog.6792.jpg \n inflating: train/dog.6793.jpg \n inflating: train/dog.6794.jpg \n inflating: train/dog.6795.jpg \n inflating: train/dog.6796.jpg \n inflating: train/dog.6797.jpg \n inflating: train/dog.6798.jpg \n inflating: train/dog.6799.jpg \n inflating: train/dog.68.jpg \n inflating: train/dog.680.jpg \n inflating: train/dog.6800.jpg \n inflating: train/dog.6801.jpg \n inflating: train/dog.6802.jpg \n inflating: train/dog.6803.jpg \n inflating: train/dog.6804.jpg \n inflating: train/dog.6805.jpg \n inflating: train/dog.6806.jpg \n inflating: train/dog.6807.jpg \n inflating: train/dog.6808.jpg \n inflating: train/dog.6809.jpg \n inflating: train/dog.681.jpg \n inflating: train/dog.6810.jpg \n inflating: train/dog.6811.jpg \n inflating: train/dog.6812.jpg \n inflating: train/dog.6813.jpg \n inflating: train/dog.6814.jpg \n inflating: train/dog.6815.jpg \n inflating: train/dog.6816.jpg \n inflating: train/dog.6817.jpg \n inflating: train/dog.6818.jpg \n inflating: train/dog.6819.jpg \n inflating: train/dog.682.jpg \n inflating: train/dog.6820.jpg \n inflating: train/dog.6821.jpg \n inflating: train/dog.6822.jpg \n inflating: train/dog.6823.jpg \n inflating: train/dog.6824.jpg \n inflating: train/dog.6825.jpg \n inflating: train/dog.6826.jpg \n inflating: train/dog.6827.jpg \n inflating: train/dog.6828.jpg \n inflating: train/dog.6829.jpg \n inflating: train/dog.683.jpg \n inflating: train/dog.6830.jpg \n inflating: train/dog.6831.jpg \n inflating: train/dog.6832.jpg \n inflating: train/dog.6833.jpg \n inflating: train/dog.6834.jpg \n inflating: train/dog.6835.jpg \n inflating: train/dog.6836.jpg \n inflating: train/dog.6837.jpg \n inflating: train/dog.6838.jpg \n inflating: train/dog.6839.jpg \n inflating: train/dog.684.jpg \n inflating: train/dog.6840.jpg \n inflating: train/dog.6841.jpg \n inflating: train/dog.6842.jpg \n inflating: train/dog.6843.jpg \n inflating: train/dog.6844.jpg \n inflating: train/dog.6845.jpg \n inflating: train/dog.6846.jpg \n inflating: train/dog.6847.jpg \n inflating: train/dog.6848.jpg \n inflating: train/dog.6849.jpg \n inflating: train/dog.685.jpg \n inflating: train/dog.6850.jpg \n inflating: train/dog.6851.jpg \n inflating: train/dog.6852.jpg \n inflating: train/dog.6853.jpg \n inflating: train/dog.6854.jpg \n inflating: train/dog.6855.jpg \n inflating: train/dog.6856.jpg \n inflating: train/dog.6857.jpg \n inflating: 
train/dog.6858.jpg \n inflating: train/dog.6859.jpg \n inflating: train/dog.686.jpg \n inflating: train/dog.6860.jpg \n inflating: train/dog.6861.jpg \n inflating: train/dog.6862.jpg \n inflating: train/dog.6863.jpg \n inflating: train/dog.6864.jpg \n inflating: train/dog.6865.jpg \n inflating: train/dog.6866.jpg \n inflating: train/dog.6867.jpg \n inflating: train/dog.6868.jpg \n inflating: train/dog.6869.jpg \n inflating: train/dog.687.jpg \n inflating: train/dog.6870.jpg \n inflating: train/dog.6871.jpg \n inflating: train/dog.6872.jpg \n inflating: train/dog.6873.jpg \n inflating: train/dog.6874.jpg \n inflating: train/dog.6875.jpg \n inflating: train/dog.6876.jpg \n inflating: train/dog.6877.jpg \n inflating: train/dog.6878.jpg \n inflating: train/dog.6879.jpg \n inflating: train/dog.688.jpg \n inflating: train/dog.6880.jpg \n inflating: train/dog.6881.jpg \n inflating: train/dog.6882.jpg \n inflating: train/dog.6883.jpg \n inflating: train/dog.6884.jpg \n inflating: train/dog.6885.jpg \n inflating: train/dog.6886.jpg \n inflating: train/dog.6887.jpg \n inflating: train/dog.6888.jpg \n inflating: train/dog.6889.jpg \n inflating: train/dog.689.jpg \n inflating: train/dog.6890.jpg \n inflating: train/dog.6891.jpg \n inflating: train/dog.6892.jpg \n inflating: train/dog.6893.jpg \n inflating: train/dog.6894.jpg \n inflating: train/dog.6895.jpg \n inflating: train/dog.6896.jpg \n inflating: train/dog.6897.jpg \n inflating: train/dog.6898.jpg \n inflating: train/dog.6899.jpg \n inflating: train/dog.69.jpg \n inflating: train/dog.690.jpg \n inflating: train/dog.6900.jpg \n inflating: train/dog.6901.jpg \n inflating: train/dog.6902.jpg \n inflating: train/dog.6903.jpg \n inflating: train/dog.6904.jpg \n inflating: train/dog.6905.jpg \n inflating: train/dog.6906.jpg \n inflating: train/dog.6907.jpg \n inflating: train/dog.6908.jpg \n inflating: train/dog.6909.jpg \n inflating: train/dog.691.jpg \n inflating: train/dog.6910.jpg \n inflating: train/dog.6911.jpg \n inflating: train/dog.6912.jpg \n inflating: train/dog.6913.jpg \n inflating: train/dog.6914.jpg \n inflating: train/dog.6915.jpg \n inflating: train/dog.6916.jpg \n inflating: train/dog.6917.jpg \n inflating: train/dog.6918.jpg \n inflating: train/dog.6919.jpg \n inflating: train/dog.692.jpg \n inflating: train/dog.6920.jpg \n inflating: train/dog.6921.jpg \n inflating: train/dog.6922.jpg \n inflating: train/dog.6923.jpg \n inflating: train/dog.6924.jpg \n inflating: train/dog.6925.jpg \n inflating: train/dog.6926.jpg \n inflating: train/dog.6927.jpg \n inflating: train/dog.6928.jpg \n inflating: train/dog.6929.jpg \n inflating: train/dog.693.jpg \n inflating: train/dog.6930.jpg \n inflating: train/dog.6931.jpg \n inflating: train/dog.6932.jpg \n inflating: train/dog.6933.jpg \n inflating: train/dog.6934.jpg \n inflating: train/dog.6935.jpg \n inflating: train/dog.6936.jpg \n inflating: train/dog.6937.jpg \n inflating: train/dog.6938.jpg \n inflating: train/dog.6939.jpg \n inflating: train/dog.694.jpg \n inflating: train/dog.6940.jpg \n inflating: train/dog.6941.jpg \n inflating: train/dog.6942.jpg \n inflating: train/dog.6943.jpg \n inflating: train/dog.6944.jpg \n inflating: train/dog.6945.jpg \n inflating: train/dog.6946.jpg \n inflating: train/dog.6947.jpg \n inflating: train/dog.6948.jpg \n inflating: train/dog.6949.jpg \n inflating: train/dog.695.jpg \n inflating: train/dog.6950.jpg \n inflating: train/dog.6951.jpg \n inflating: train/dog.6952.jpg \n inflating: train/dog.6953.jpg \n inflating: train/dog.6954.jpg \n inflating: 
train/dog.6955.jpg \n inflating: train/dog.6956.jpg \n inflating: train/dog.6957.jpg \n inflating: train/dog.6958.jpg \n inflating: train/dog.6959.jpg \n inflating: train/dog.696.jpg \n inflating: train/dog.6960.jpg \n inflating: train/dog.6961.jpg \n inflating: train/dog.6962.jpg \n inflating: train/dog.6963.jpg \n inflating: train/dog.6964.jpg \n inflating: train/dog.6965.jpg \n inflating: train/dog.6966.jpg \n inflating: train/dog.6967.jpg \n inflating: train/dog.6968.jpg \n inflating: train/dog.6969.jpg \n inflating: train/dog.697.jpg \n inflating: train/dog.6970.jpg \n inflating: train/dog.6971.jpg \n inflating: train/dog.6972.jpg \n inflating: train/dog.6973.jpg \n inflating: train/dog.6974.jpg \n inflating: train/dog.6975.jpg \n inflating: train/dog.6976.jpg \n inflating: train/dog.6977.jpg \n inflating: train/dog.6978.jpg \n inflating: train/dog.6979.jpg \n inflating: train/dog.698.jpg \n inflating: train/dog.6980.jpg \n inflating: train/dog.6981.jpg \n inflating: train/dog.6982.jpg \n inflating: train/dog.6983.jpg \n inflating: train/dog.6984.jpg \n inflating: train/dog.6985.jpg \n inflating: train/dog.6986.jpg \n inflating: train/dog.6987.jpg \n inflating: train/dog.6988.jpg \n inflating: train/dog.6989.jpg \n inflating: train/dog.699.jpg \n inflating: train/dog.6990.jpg \n inflating: train/dog.6991.jpg \n inflating: train/dog.6992.jpg \n inflating: train/dog.6993.jpg \n inflating: train/dog.6994.jpg \n inflating: train/dog.6995.jpg \n inflating: train/dog.6996.jpg \n inflating: train/dog.6997.jpg \n inflating: train/dog.6998.jpg \n inflating: train/dog.6999.jpg \n inflating: train/dog.7.jpg \n inflating: train/dog.70.jpg \n inflating: train/dog.700.jpg \n inflating: train/dog.7000.jpg \n inflating: train/dog.7001.jpg \n inflating: train/dog.7002.jpg \n inflating: train/dog.7003.jpg \n inflating: train/dog.7004.jpg \n inflating: train/dog.7005.jpg \n inflating: train/dog.7006.jpg \n inflating: train/dog.7007.jpg \n inflating: train/dog.7008.jpg \n inflating: train/dog.7009.jpg \n inflating: train/dog.701.jpg \n inflating: train/dog.7010.jpg \n inflating: train/dog.7011.jpg \n inflating: train/dog.7012.jpg \n inflating: train/dog.7013.jpg \n inflating: train/dog.7014.jpg \n inflating: train/dog.7015.jpg \n inflating: train/dog.7016.jpg \n inflating: train/dog.7017.jpg \n inflating: train/dog.7018.jpg \n inflating: train/dog.7019.jpg \n inflating: train/dog.702.jpg \n inflating: train/dog.7020.jpg \n inflating: train/dog.7021.jpg \n inflating: train/dog.7022.jpg \n inflating: train/dog.7023.jpg \n inflating: train/dog.7024.jpg \n inflating: train/dog.7025.jpg \n inflating: train/dog.7026.jpg \n inflating: train/dog.7027.jpg \n inflating: train/dog.7028.jpg \n inflating: train/dog.7029.jpg \n inflating: train/dog.703.jpg \n inflating: train/dog.7030.jpg \n inflating: train/dog.7031.jpg \n inflating: train/dog.7032.jpg \n inflating: train/dog.7033.jpg \n inflating: train/dog.7034.jpg \n inflating: train/dog.7035.jpg \n inflating: train/dog.7036.jpg \n inflating: train/dog.7037.jpg \n inflating: train/dog.7038.jpg \n inflating: train/dog.7039.jpg \n inflating: train/dog.704.jpg \n inflating: train/dog.7040.jpg \n inflating: train/dog.7041.jpg \n inflating: train/dog.7042.jpg \n inflating: train/dog.7043.jpg \n inflating: train/dog.7044.jpg \n inflating: train/dog.7045.jpg \n inflating: train/dog.7046.jpg \n inflating: train/dog.7047.jpg \n inflating: train/dog.7048.jpg \n inflating: train/dog.7049.jpg \n inflating: train/dog.705.jpg \n inflating: train/dog.7050.jpg \n inflating: 
train/dog.7051.jpg \n inflating: train/dog.7052.jpg \n inflating: train/dog.7053.jpg \n inflating: train/dog.7054.jpg \n inflating: train/dog.7055.jpg \n inflating: train/dog.7056.jpg \n inflating: train/dog.7057.jpg \n inflating: train/dog.7058.jpg \n inflating: train/dog.7059.jpg \n inflating: train/dog.706.jpg \n inflating: train/dog.7060.jpg \n inflating: train/dog.7061.jpg \n inflating: train/dog.7062.jpg \n inflating: train/dog.7063.jpg \n inflating: train/dog.7064.jpg \n inflating: train/dog.7065.jpg \n inflating: train/dog.7066.jpg \n inflating: train/dog.7067.jpg \n inflating: train/dog.7068.jpg \n inflating: train/dog.7069.jpg \n inflating: train/dog.707.jpg \n inflating: train/dog.7070.jpg \n inflating: train/dog.7071.jpg \n inflating: train/dog.7072.jpg \n inflating: train/dog.7073.jpg \n inflating: train/dog.7074.jpg \n inflating: train/dog.7075.jpg \n inflating: train/dog.7076.jpg \n inflating: train/dog.7077.jpg \n inflating: train/dog.7078.jpg \n inflating: train/dog.7079.jpg \n inflating: train/dog.708.jpg \n inflating: train/dog.7080.jpg \n inflating: train/dog.7081.jpg \n inflating: train/dog.7082.jpg \n inflating: train/dog.7083.jpg \n inflating: train/dog.7084.jpg \n inflating: train/dog.7085.jpg \n inflating: train/dog.7086.jpg \n inflating: train/dog.7087.jpg \n inflating: train/dog.7088.jpg \n inflating: train/dog.7089.jpg \n inflating: train/dog.709.jpg \n inflating: train/dog.7090.jpg \n inflating: train/dog.7091.jpg \n inflating: train/dog.7092.jpg \n inflating: train/dog.7093.jpg \n inflating: train/dog.7094.jpg \n inflating: train/dog.7095.jpg \n inflating: train/dog.7096.jpg \n inflating: train/dog.7097.jpg \n inflating: train/dog.7098.jpg \n inflating: train/dog.7099.jpg \n inflating: train/dog.71.jpg \n inflating: train/dog.710.jpg \n inflating: train/dog.7100.jpg \n inflating: train/dog.7101.jpg \n inflating: train/dog.7102.jpg \n inflating: train/dog.7103.jpg \n inflating: train/dog.7104.jpg \n inflating: train/dog.7105.jpg \n inflating: train/dog.7106.jpg \n inflating: train/dog.7107.jpg \n inflating: train/dog.7108.jpg \n inflating: train/dog.7109.jpg \n inflating: train/dog.711.jpg \n inflating: train/dog.7110.jpg \n inflating: train/dog.7111.jpg \n inflating: train/dog.7112.jpg \n inflating: train/dog.7113.jpg \n inflating: train/dog.7114.jpg \n inflating: train/dog.7115.jpg \n inflating: train/dog.7116.jpg \n inflating: train/dog.7117.jpg \n inflating: train/dog.7118.jpg \n inflating: train/dog.7119.jpg \n inflating: train/dog.712.jpg \n inflating: train/dog.7120.jpg \n inflating: train/dog.7121.jpg \n inflating: train/dog.7122.jpg \n inflating: train/dog.7123.jpg \n inflating: train/dog.7124.jpg \n inflating: train/dog.7125.jpg \n inflating: train/dog.7126.jpg \n inflating: train/dog.7127.jpg \n inflating: train/dog.7128.jpg \n inflating: train/dog.7129.jpg \n inflating: train/dog.713.jpg \n inflating: train/dog.7130.jpg \n inflating: train/dog.7131.jpg \n inflating: train/dog.7132.jpg \n inflating: train/dog.7133.jpg \n inflating: train/dog.7134.jpg \n inflating: train/dog.7135.jpg \n inflating: train/dog.7136.jpg \n inflating: train/dog.7137.jpg \n inflating: train/dog.7138.jpg \n inflating: train/dog.7139.jpg \n inflating: train/dog.714.jpg \n inflating: train/dog.7140.jpg \n inflating: train/dog.7141.jpg \n inflating: train/dog.7142.jpg \n inflating: train/dog.7143.jpg \n inflating: train/dog.7144.jpg \n inflating: train/dog.7145.jpg \n inflating: train/dog.7146.jpg \n inflating: train/dog.7147.jpg \n inflating: train/dog.7148.jpg \n inflating: 
train/dog.7149.jpg \n inflating: train/dog.715.jpg \n inflating: train/dog.7150.jpg \n inflating: train/dog.7151.jpg \n inflating: train/dog.7152.jpg \n inflating: train/dog.7153.jpg \n inflating: train/dog.7154.jpg \n inflating: train/dog.7155.jpg \n inflating: train/dog.7156.jpg \n inflating: train/dog.7157.jpg \n inflating: train/dog.7158.jpg \n inflating: train/dog.7159.jpg \n inflating: train/dog.716.jpg \n inflating: train/dog.7160.jpg \n inflating: train/dog.7161.jpg \n inflating: train/dog.7162.jpg \n inflating: train/dog.7163.jpg \n inflating: train/dog.7164.jpg \n inflating: train/dog.7165.jpg \n inflating: train/dog.7166.jpg \n inflating: train/dog.7167.jpg \n inflating: train/dog.7168.jpg \n inflating: train/dog.7169.jpg \n inflating: train/dog.717.jpg \n inflating: train/dog.7170.jpg \n inflating: train/dog.7171.jpg \n inflating: train/dog.7172.jpg \n inflating: train/dog.7173.jpg \n inflating: train/dog.7174.jpg \n inflating: train/dog.7175.jpg \n inflating: train/dog.7176.jpg \n inflating: train/dog.7177.jpg \n inflating: train/dog.7178.jpg \n inflating: train/dog.7179.jpg \n inflating: train/dog.718.jpg \n inflating: train/dog.7180.jpg \n inflating: train/dog.7181.jpg \n inflating: train/dog.7182.jpg \n inflating: train/dog.7183.jpg \n inflating: train/dog.7184.jpg \n inflating: train/dog.7185.jpg \n inflating: train/dog.7186.jpg \n inflating: train/dog.7187.jpg \n inflating: train/dog.7188.jpg \n inflating: train/dog.7189.jpg \n inflating: train/dog.719.jpg \n inflating: train/dog.7190.jpg \n inflating: train/dog.7191.jpg \n inflating: train/dog.7192.jpg \n inflating: train/dog.7193.jpg \n inflating: train/dog.7194.jpg \n inflating: train/dog.7195.jpg \n inflating: train/dog.7196.jpg \n inflating: train/dog.7197.jpg \n inflating: train/dog.7198.jpg \n inflating: train/dog.7199.jpg \n inflating: train/dog.72.jpg \n inflating: train/dog.720.jpg \n inflating: train/dog.7200.jpg \n inflating: train/dog.7201.jpg \n inflating: train/dog.7202.jpg \n inflating: train/dog.7203.jpg \n inflating: train/dog.7204.jpg \n inflating: train/dog.7205.jpg \n inflating: train/dog.7206.jpg \n inflating: train/dog.7207.jpg \n inflating: train/dog.7208.jpg \n inflating: train/dog.7209.jpg \n inflating: train/dog.721.jpg \n inflating: train/dog.7210.jpg \n inflating: train/dog.7211.jpg \n inflating: train/dog.7212.jpg \n inflating: train/dog.7213.jpg \n inflating: train/dog.7214.jpg \n inflating: train/dog.7215.jpg \n inflating: train/dog.7216.jpg \n inflating: train/dog.7217.jpg \n inflating: train/dog.7218.jpg \n inflating: train/dog.7219.jpg \n inflating: train/dog.722.jpg \n inflating: train/dog.7220.jpg \n inflating: train/dog.7221.jpg \n inflating: train/dog.7222.jpg \n inflating: train/dog.7223.jpg \n inflating: train/dog.7224.jpg \n inflating: train/dog.7225.jpg \n inflating: train/dog.7226.jpg \n inflating: train/dog.7227.jpg \n inflating: train/dog.7228.jpg \n inflating: train/dog.7229.jpg \n inflating: train/dog.723.jpg \n inflating: train/dog.7230.jpg \n inflating: train/dog.7231.jpg \n inflating: train/dog.7232.jpg \n inflating: train/dog.7233.jpg \n inflating: train/dog.7234.jpg \n inflating: train/dog.7235.jpg \n inflating: train/dog.7236.jpg \n inflating: train/dog.7237.jpg \n inflating: train/dog.7238.jpg \n inflating: train/dog.7239.jpg \n inflating: train/dog.724.jpg \n inflating: train/dog.7240.jpg \n inflating: train/dog.7241.jpg \n inflating: train/dog.7242.jpg \n inflating: train/dog.7243.jpg \n inflating: train/dog.7244.jpg \n inflating: train/dog.7245.jpg \n inflating: 
train/dog.7246.jpg \n inflating: train/dog.7247.jpg \n inflating: train/dog.7248.jpg \n inflating: train/dog.7249.jpg \n inflating: train/dog.725.jpg \n inflating: train/dog.7250.jpg \n inflating: train/dog.7251.jpg \n inflating: train/dog.7252.jpg \n inflating: train/dog.7253.jpg \n inflating: train/dog.7254.jpg \n inflating: train/dog.7255.jpg \n inflating: train/dog.7256.jpg \n inflating: train/dog.7257.jpg \n inflating: train/dog.7258.jpg \n inflating: train/dog.7259.jpg \n inflating: train/dog.726.jpg \n inflating: train/dog.7260.jpg \n inflating: train/dog.7261.jpg \n inflating: train/dog.7262.jpg \n inflating: train/dog.7263.jpg \n inflating: train/dog.7264.jpg \n inflating: train/dog.7265.jpg \n inflating: train/dog.7266.jpg \n inflating: train/dog.7267.jpg \n inflating: train/dog.7268.jpg \n inflating: train/dog.7269.jpg \n inflating: train/dog.727.jpg \n inflating: train/dog.7270.jpg \n inflating: train/dog.7271.jpg \n inflating: train/dog.7272.jpg \n inflating: train/dog.7273.jpg \n inflating: train/dog.7274.jpg \n inflating: train/dog.7275.jpg \n inflating: train/dog.7276.jpg \n inflating: train/dog.7277.jpg \n inflating: train/dog.7278.jpg \n inflating: train/dog.7279.jpg \n inflating: train/dog.728.jpg \n inflating: train/dog.7280.jpg \n inflating: train/dog.7281.jpg \n inflating: train/dog.7282.jpg \n inflating: train/dog.7283.jpg \n inflating: train/dog.7284.jpg \n inflating: train/dog.7285.jpg \n inflating: train/dog.7286.jpg \n inflating: train/dog.7287.jpg \n inflating: train/dog.7288.jpg \n inflating: train/dog.7289.jpg \n inflating: train/dog.729.jpg \n inflating: train/dog.7290.jpg \n inflating: train/dog.7291.jpg \n inflating: train/dog.7292.jpg \n inflating: train/dog.7293.jpg \n inflating: train/dog.7294.jpg \n inflating: train/dog.7295.jpg \n inflating: train/dog.7296.jpg \n inflating: train/dog.7297.jpg \n inflating: train/dog.7298.jpg \n inflating: train/dog.7299.jpg \n inflating: train/dog.73.jpg \n inflating: train/dog.730.jpg \n inflating: train/dog.7300.jpg \n inflating: train/dog.7301.jpg \n inflating: train/dog.7302.jpg \n inflating: train/dog.7303.jpg \n inflating: train/dog.7304.jpg \n inflating: train/dog.7305.jpg \n inflating: train/dog.7306.jpg \n inflating: train/dog.7307.jpg \n inflating: train/dog.7308.jpg \n inflating: train/dog.7309.jpg \n inflating: train/dog.731.jpg \n inflating: train/dog.7310.jpg \n inflating: train/dog.7311.jpg \n inflating: train/dog.7312.jpg \n inflating: train/dog.7313.jpg \n inflating: train/dog.7314.jpg \n inflating: train/dog.7315.jpg \n inflating: train/dog.7316.jpg \n inflating: train/dog.7317.jpg \n inflating: train/dog.7318.jpg \n inflating: train/dog.7319.jpg \n inflating: train/dog.732.jpg \n inflating: train/dog.7320.jpg \n inflating: train/dog.7321.jpg \n inflating: train/dog.7322.jpg \n inflating: train/dog.7323.jpg \n inflating: train/dog.7324.jpg \n inflating: train/dog.7325.jpg \n inflating: train/dog.7326.jpg \n inflating: train/dog.7327.jpg \n inflating: train/dog.7328.jpg \n inflating: train/dog.7329.jpg \n inflating: train/dog.733.jpg \n inflating: train/dog.7330.jpg \n inflating: train/dog.7331.jpg \n inflating: train/dog.7332.jpg \n inflating: train/dog.7333.jpg \n inflating: train/dog.7334.jpg \n inflating: train/dog.7335.jpg \n inflating: train/dog.7336.jpg \n inflating: train/dog.7337.jpg \n inflating: train/dog.7338.jpg \n inflating: train/dog.7339.jpg \n inflating: train/dog.734.jpg \n inflating: train/dog.7340.jpg \n inflating: train/dog.7341.jpg \n inflating: train/dog.7342.jpg \n inflating: 
train/dog.7343.jpg \n inflating: train/dog.7344.jpg \n inflating: train/dog.7345.jpg \n inflating: train/dog.7346.jpg \n inflating: train/dog.7347.jpg \n inflating: train/dog.7348.jpg \n inflating: train/dog.7349.jpg \n inflating: train/dog.735.jpg \n inflating: train/dog.7350.jpg \n inflating: train/dog.7351.jpg \n inflating: train/dog.7352.jpg \n inflating: train/dog.7353.jpg \n inflating: train/dog.7354.jpg \n inflating: train/dog.7355.jpg \n inflating: train/dog.7356.jpg \n inflating: train/dog.7357.jpg \n inflating: train/dog.7358.jpg \n inflating: train/dog.7359.jpg \n inflating: train/dog.736.jpg \n inflating: train/dog.7360.jpg \n inflating: train/dog.7361.jpg \n inflating: train/dog.7362.jpg \n inflating: train/dog.7363.jpg \n inflating: train/dog.7364.jpg \n inflating: train/dog.7365.jpg \n inflating: train/dog.7366.jpg \n inflating: train/dog.7367.jpg \n inflating: train/dog.7368.jpg \n inflating: train/dog.7369.jpg \n inflating: train/dog.737.jpg \n inflating: train/dog.7370.jpg \n inflating: train/dog.7371.jpg \n inflating: train/dog.7372.jpg \n inflating: train/dog.7373.jpg \n inflating: train/dog.7374.jpg \n inflating: train/dog.7375.jpg \n inflating: train/dog.7376.jpg \n inflating: train/dog.7377.jpg \n inflating: train/dog.7378.jpg \n inflating: train/dog.7379.jpg \n inflating: train/dog.738.jpg \n inflating: train/dog.7380.jpg \n inflating: train/dog.7381.jpg \n inflating: train/dog.7382.jpg \n inflating: train/dog.7383.jpg \n inflating: train/dog.7384.jpg \n inflating: train/dog.7385.jpg \n inflating: train/dog.7386.jpg \n inflating: train/dog.7387.jpg \n inflating: train/dog.7388.jpg \n inflating: train/dog.7389.jpg \n inflating: train/dog.739.jpg \n inflating: train/dog.7390.jpg \n inflating: train/dog.7391.jpg \n inflating: train/dog.7392.jpg \n inflating: train/dog.7393.jpg \n inflating: train/dog.7394.jpg \n inflating: train/dog.7395.jpg \n inflating: train/dog.7396.jpg \n inflating: train/dog.7397.jpg \n inflating: train/dog.7398.jpg \n inflating: train/dog.7399.jpg \n inflating: train/dog.74.jpg \n inflating: train/dog.740.jpg \n inflating: train/dog.7400.jpg \n inflating: train/dog.7401.jpg \n inflating: train/dog.7402.jpg \n inflating: train/dog.7403.jpg \n inflating: train/dog.7404.jpg \n inflating: train/dog.7405.jpg \n inflating: train/dog.7406.jpg \n inflating: train/dog.7407.jpg \n inflating: train/dog.7408.jpg \n inflating: train/dog.7409.jpg \n inflating: train/dog.741.jpg \n inflating: train/dog.7410.jpg \n inflating: train/dog.7411.jpg \n inflating: train/dog.7412.jpg \n inflating: train/dog.7413.jpg \n inflating: train/dog.7414.jpg \n inflating: train/dog.7415.jpg \n inflating: train/dog.7416.jpg \n inflating: train/dog.7417.jpg \n inflating: train/dog.7418.jpg \n inflating: train/dog.7419.jpg \n inflating: train/dog.742.jpg \n inflating: train/dog.7420.jpg \n inflating: train/dog.7421.jpg \n inflating: train/dog.7422.jpg \n inflating: train/dog.7423.jpg \n inflating: train/dog.7424.jpg \n inflating: train/dog.7425.jpg \n inflating: train/dog.7426.jpg \n inflating: train/dog.7427.jpg \n inflating: train/dog.7428.jpg \n inflating: train/dog.7429.jpg \n inflating: train/dog.743.jpg \n inflating: train/dog.7430.jpg \n inflating: train/dog.7431.jpg \n inflating: train/dog.7432.jpg \n inflating: train/dog.7433.jpg \n inflating: train/dog.7434.jpg \n inflating: train/dog.7435.jpg \n inflating: train/dog.7436.jpg \n inflating: train/dog.7437.jpg \n inflating: train/dog.7438.jpg \n inflating: train/dog.7439.jpg \n inflating: train/dog.744.jpg \n inflating: 
train/dog.7440.jpg \n inflating: train/dog.7441.jpg \n inflating: train/dog.7442.jpg \n inflating: train/dog.7443.jpg \n inflating: train/dog.7444.jpg \n inflating: train/dog.7445.jpg \n inflating: train/dog.7446.jpg \n inflating: train/dog.7447.jpg \n inflating: train/dog.7448.jpg \n inflating: train/dog.7449.jpg \n inflating: train/dog.745.jpg \n inflating: train/dog.7450.jpg \n inflating: train/dog.7451.jpg \n inflating: train/dog.7452.jpg \n inflating: train/dog.7453.jpg \n inflating: train/dog.7454.jpg \n inflating: train/dog.7455.jpg \n inflating: train/dog.7456.jpg \n inflating: train/dog.7457.jpg \n inflating: train/dog.7458.jpg \n inflating: train/dog.7459.jpg \n inflating: train/dog.746.jpg \n inflating: train/dog.7460.jpg \n inflating: train/dog.7461.jpg \n inflating: train/dog.7462.jpg \n inflating: train/dog.7463.jpg \n inflating: train/dog.7464.jpg \n inflating: train/dog.7465.jpg \n inflating: train/dog.7466.jpg \n inflating: train/dog.7467.jpg \n inflating: train/dog.7468.jpg \n inflating: train/dog.7469.jpg \n inflating: train/dog.747.jpg \n inflating: train/dog.7470.jpg \n inflating: train/dog.7471.jpg \n inflating: train/dog.7472.jpg \n inflating: train/dog.7473.jpg \n inflating: train/dog.7474.jpg \n inflating: train/dog.7475.jpg \n inflating: train/dog.7476.jpg \n inflating: train/dog.7477.jpg \n inflating: train/dog.7478.jpg \n inflating: train/dog.7479.jpg \n inflating: train/dog.748.jpg \n inflating: train/dog.7480.jpg \n inflating: train/dog.7481.jpg \n inflating: train/dog.7482.jpg \n inflating: train/dog.7483.jpg \n inflating: train/dog.7484.jpg \n inflating: train/dog.7485.jpg \n inflating: train/dog.7486.jpg \n inflating: train/dog.7487.jpg \n inflating: train/dog.7488.jpg \n inflating: train/dog.7489.jpg \n inflating: train/dog.749.jpg \n inflating: train/dog.7490.jpg \n inflating: train/dog.7491.jpg \n inflating: train/dog.7492.jpg \n inflating: train/dog.7493.jpg \n inflating: train/dog.7494.jpg \n inflating: train/dog.7495.jpg \n inflating: train/dog.7496.jpg \n inflating: train/dog.7497.jpg \n inflating: train/dog.7498.jpg \n inflating: train/dog.7499.jpg \n inflating: train/dog.75.jpg \n inflating: train/dog.750.jpg \n inflating: train/dog.7500.jpg \n inflating: train/dog.7501.jpg \n inflating: train/dog.7502.jpg \n inflating: train/dog.7503.jpg \n inflating: train/dog.7504.jpg \n inflating: train/dog.7505.jpg \n inflating: train/dog.7506.jpg \n inflating: train/dog.7507.jpg \n inflating: train/dog.7508.jpg \n inflating: train/dog.7509.jpg \n inflating: train/dog.751.jpg \n inflating: train/dog.7510.jpg \n inflating: train/dog.7511.jpg \n inflating: train/dog.7512.jpg \n inflating: train/dog.7513.jpg \n inflating: train/dog.7514.jpg \n inflating: train/dog.7515.jpg \n inflating: train/dog.7516.jpg \n inflating: train/dog.7517.jpg \n inflating: train/dog.7518.jpg \n inflating: train/dog.7519.jpg \n inflating: train/dog.752.jpg \n inflating: train/dog.7520.jpg \n inflating: train/dog.7521.jpg \n inflating: train/dog.7522.jpg \n inflating: train/dog.7523.jpg \n inflating: train/dog.7524.jpg \n inflating: train/dog.7525.jpg \n inflating: train/dog.7526.jpg \n inflating: train/dog.7527.jpg \n inflating: train/dog.7528.jpg \n inflating: train/dog.7529.jpg \n inflating: train/dog.753.jpg \n inflating: train/dog.7530.jpg \n inflating: train/dog.7531.jpg \n inflating: train/dog.7532.jpg \n inflating: train/dog.7533.jpg \n inflating: train/dog.7534.jpg \n inflating: train/dog.7535.jpg \n inflating: train/dog.7536.jpg \n inflating: train/dog.7537.jpg \n inflating: 
train/dog.7538.jpg \n inflating: train/dog.7539.jpg \n inflating: train/dog.754.jpg \n inflating: train/dog.7540.jpg \n inflating: train/dog.7541.jpg \n inflating: train/dog.7542.jpg \n inflating: train/dog.7543.jpg \n inflating: train/dog.7544.jpg \n inflating: train/dog.7545.jpg \n inflating: train/dog.7546.jpg \n inflating: train/dog.7547.jpg \n inflating: train/dog.7548.jpg \n inflating: train/dog.7549.jpg \n inflating: train/dog.755.jpg \n inflating: train/dog.7550.jpg \n inflating: train/dog.7551.jpg \n inflating: train/dog.7552.jpg \n inflating: train/dog.7553.jpg \n inflating: train/dog.7554.jpg \n inflating: train/dog.7555.jpg \n inflating: train/dog.7556.jpg \n inflating: train/dog.7557.jpg \n inflating: train/dog.7558.jpg \n inflating: train/dog.7559.jpg \n inflating: train/dog.756.jpg \n inflating: train/dog.7560.jpg \n inflating: train/dog.7561.jpg \n inflating: train/dog.7562.jpg \n inflating: train/dog.7563.jpg \n inflating: train/dog.7564.jpg \n inflating: train/dog.7565.jpg \n inflating: train/dog.7566.jpg \n inflating: train/dog.7567.jpg \n inflating: train/dog.7568.jpg \n inflating: train/dog.7569.jpg \n inflating: train/dog.757.jpg \n inflating: train/dog.7570.jpg \n inflating: train/dog.7571.jpg \n inflating: train/dog.7572.jpg \n inflating: train/dog.7573.jpg \n inflating: train/dog.7574.jpg \n inflating: train/dog.7575.jpg \n inflating: train/dog.7576.jpg \n inflating: train/dog.7577.jpg \n inflating: train/dog.7578.jpg \n inflating: train/dog.7579.jpg \n inflating: train/dog.758.jpg \n inflating: train/dog.7580.jpg \n inflating: train/dog.7581.jpg \n inflating: train/dog.7582.jpg \n inflating: train/dog.7583.jpg \n inflating: train/dog.7584.jpg \n inflating: train/dog.7585.jpg \n inflating: train/dog.7586.jpg \n inflating: train/dog.7587.jpg \n inflating: train/dog.7588.jpg \n inflating: train/dog.7589.jpg \n inflating: train/dog.759.jpg \n inflating: train/dog.7590.jpg \n inflating: train/dog.7591.jpg \n inflating: train/dog.7592.jpg \n inflating: train/dog.7593.jpg \n inflating: train/dog.7594.jpg \n inflating: train/dog.7595.jpg \n inflating: train/dog.7596.jpg \n inflating: train/dog.7597.jpg \n inflating: train/dog.7598.jpg \n inflating: train/dog.7599.jpg \n inflating: train/dog.76.jpg \n inflating: train/dog.760.jpg \n inflating: train/dog.7600.jpg \n inflating: train/dog.7601.jpg \n inflating: train/dog.7602.jpg \n inflating: train/dog.7603.jpg \n inflating: train/dog.7604.jpg \n inflating: train/dog.7605.jpg \n inflating: train/dog.7606.jpg \n inflating: train/dog.7607.jpg \n inflating: train/dog.7608.jpg \n inflating: train/dog.7609.jpg \n inflating: train/dog.761.jpg \n inflating: train/dog.7610.jpg \n inflating: train/dog.7611.jpg \n inflating: train/dog.7612.jpg \n inflating: train/dog.7613.jpg \n inflating: train/dog.7614.jpg \n inflating: train/dog.7615.jpg \n inflating: train/dog.7616.jpg \n inflating: train/dog.7617.jpg \n inflating: train/dog.7618.jpg \n inflating: train/dog.7619.jpg \n inflating: train/dog.762.jpg \n inflating: train/dog.7620.jpg \n inflating: train/dog.7621.jpg \n inflating: train/dog.7622.jpg \n inflating: train/dog.7623.jpg \n inflating: train/dog.7624.jpg \n inflating: train/dog.7625.jpg \n inflating: train/dog.7626.jpg \n inflating: train/dog.7627.jpg \n inflating: train/dog.7628.jpg \n inflating: train/dog.7629.jpg \n inflating: train/dog.763.jpg \n inflating: train/dog.7630.jpg \n inflating: train/dog.7631.jpg \n inflating: train/dog.7632.jpg \n inflating: train/dog.7633.jpg \n inflating: train/dog.7634.jpg \n inflating: 
train/dog.7635.jpg \n inflating: train/dog.7636.jpg \n inflating: train/dog.7637.jpg \n inflating: train/dog.7638.jpg \n inflating: train/dog.7639.jpg \n inflating: train/dog.764.jpg \n inflating: train/dog.7640.jpg \n inflating: train/dog.7641.jpg \n inflating: train/dog.7642.jpg \n inflating: train/dog.7643.jpg \n inflating: train/dog.7644.jpg \n inflating: train/dog.7645.jpg \n inflating: train/dog.7646.jpg \n inflating: train/dog.7647.jpg \n inflating: train/dog.7648.jpg \n inflating: train/dog.7649.jpg \n inflating: train/dog.765.jpg \n inflating: train/dog.7650.jpg \n inflating: train/dog.7651.jpg \n inflating: train/dog.7652.jpg \n inflating: train/dog.7653.jpg \n inflating: train/dog.7654.jpg \n inflating: train/dog.7655.jpg \n inflating: train/dog.7656.jpg \n inflating: train/dog.7657.jpg \n inflating: train/dog.7658.jpg \n inflating: train/dog.7659.jpg \n inflating: train/dog.766.jpg \n inflating: train/dog.7660.jpg \n inflating: train/dog.7661.jpg \n inflating: train/dog.7662.jpg \n inflating: train/dog.7663.jpg \n inflating: train/dog.7664.jpg \n inflating: train/dog.7665.jpg \n inflating: train/dog.7666.jpg \n inflating: train/dog.7667.jpg \n inflating: train/dog.7668.jpg \n inflating: train/dog.7669.jpg \n inflating: train/dog.767.jpg \n inflating: train/dog.7670.jpg \n inflating: train/dog.7671.jpg \n inflating: train/dog.7672.jpg \n inflating: train/dog.7673.jpg \n inflating: train/dog.7674.jpg \n inflating: train/dog.7675.jpg \n inflating: train/dog.7676.jpg \n inflating: train/dog.7677.jpg \n inflating: train/dog.7678.jpg \n inflating: train/dog.7679.jpg \n inflating: train/dog.768.jpg \n inflating: train/dog.7680.jpg \n inflating: train/dog.7681.jpg \n inflating: train/dog.7682.jpg \n inflating: train/dog.7683.jpg \n inflating: train/dog.7684.jpg \n inflating: train/dog.7685.jpg \n inflating: train/dog.7686.jpg \n inflating: train/dog.7687.jpg \n inflating: train/dog.7688.jpg \n inflating: train/dog.7689.jpg \n inflating: train/dog.769.jpg \n inflating: train/dog.7690.jpg \n inflating: train/dog.7691.jpg \n inflating: train/dog.7692.jpg \n inflating: train/dog.7693.jpg \n inflating: train/dog.7694.jpg \n inflating: train/dog.7695.jpg \n inflating: train/dog.7696.jpg \n inflating: train/dog.7697.jpg \n inflating: train/dog.7698.jpg \n inflating: train/dog.7699.jpg \n inflating: train/dog.77.jpg \n inflating: train/dog.770.jpg \n inflating: train/dog.7700.jpg \n inflating: train/dog.7701.jpg \n inflating: train/dog.7702.jpg \n inflating: train/dog.7703.jpg \n inflating: train/dog.7704.jpg \n inflating: train/dog.7705.jpg \n inflating: train/dog.7706.jpg \n inflating: train/dog.7707.jpg \n inflating: train/dog.7708.jpg \n inflating: train/dog.7709.jpg \n inflating: train/dog.771.jpg \n inflating: train/dog.7710.jpg \n inflating: train/dog.7711.jpg \n inflating: train/dog.7712.jpg \n inflating: train/dog.7713.jpg \n inflating: train/dog.7714.jpg \n inflating: train/dog.7715.jpg \n inflating: train/dog.7716.jpg \n inflating: train/dog.7717.jpg \n inflating: train/dog.7718.jpg \n inflating: train/dog.7719.jpg \n inflating: train/dog.772.jpg \n inflating: train/dog.7720.jpg \n inflating: train/dog.7721.jpg \n inflating: train/dog.7722.jpg \n inflating: train/dog.7723.jpg \n inflating: train/dog.7724.jpg \n inflating: train/dog.7725.jpg \n inflating: train/dog.7726.jpg \n inflating: train/dog.7727.jpg \n inflating: train/dog.7728.jpg \n inflating: train/dog.7729.jpg \n inflating: train/dog.773.jpg \n inflating: train/dog.7730.jpg \n inflating: train/dog.7731.jpg \n inflating: 
train/dog.9774.jpg \n inflating: train/dog.9775.jpg \n inflating: train/dog.9776.jpg \n inflating: train/dog.9777.jpg \n inflating: train/dog.9778.jpg \n inflating: train/dog.9779.jpg \n inflating: train/dog.978.jpg \n inflating: train/dog.9780.jpg \n inflating: train/dog.9781.jpg \n inflating: train/dog.9782.jpg \n inflating: train/dog.9783.jpg \n inflating: train/dog.9784.jpg \n inflating: train/dog.9785.jpg \n inflating: train/dog.9786.jpg \n inflating: train/dog.9787.jpg \n inflating: train/dog.9788.jpg \n inflating: train/dog.9789.jpg \n inflating: train/dog.979.jpg \n inflating: train/dog.9790.jpg \n inflating: train/dog.9791.jpg \n inflating: train/dog.9792.jpg \n inflating: train/dog.9793.jpg \n inflating: train/dog.9794.jpg \n inflating: train/dog.9795.jpg \n inflating: train/dog.9796.jpg \n inflating: train/dog.9797.jpg \n inflating: train/dog.9798.jpg \n inflating: train/dog.9799.jpg \n inflating: train/dog.98.jpg \n inflating: train/dog.980.jpg \n inflating: train/dog.9800.jpg \n inflating: train/dog.9801.jpg \n inflating: train/dog.9802.jpg \n inflating: train/dog.9803.jpg \n inflating: train/dog.9804.jpg \n inflating: train/dog.9805.jpg \n inflating: train/dog.9806.jpg \n inflating: train/dog.9807.jpg \n inflating: train/dog.9808.jpg \n inflating: train/dog.9809.jpg \n inflating: train/dog.981.jpg \n inflating: train/dog.9810.jpg \n inflating: train/dog.9811.jpg \n inflating: train/dog.9812.jpg \n inflating: train/dog.9813.jpg \n inflating: train/dog.9814.jpg \n inflating: train/dog.9815.jpg \n inflating: train/dog.9816.jpg \n inflating: train/dog.9817.jpg \n inflating: train/dog.9818.jpg \n inflating: train/dog.9819.jpg \n inflating: train/dog.982.jpg \n inflating: train/dog.9820.jpg \n inflating: train/dog.9821.jpg \n inflating: train/dog.9822.jpg \n inflating: train/dog.9823.jpg \n inflating: train/dog.9824.jpg \n inflating: train/dog.9825.jpg \n inflating: train/dog.9826.jpg \n inflating: train/dog.9827.jpg \n inflating: train/dog.9828.jpg \n inflating: train/dog.9829.jpg \n inflating: train/dog.983.jpg \n inflating: train/dog.9830.jpg \n inflating: train/dog.9831.jpg \n inflating: train/dog.9832.jpg \n inflating: train/dog.9833.jpg \n inflating: train/dog.9834.jpg \n inflating: train/dog.9835.jpg \n inflating: train/dog.9836.jpg \n inflating: train/dog.9837.jpg \n inflating: train/dog.9838.jpg \n inflating: train/dog.9839.jpg \n inflating: train/dog.984.jpg \n inflating: train/dog.9840.jpg \n inflating: train/dog.9841.jpg \n inflating: train/dog.9842.jpg \n inflating: train/dog.9843.jpg \n inflating: train/dog.9844.jpg \n inflating: train/dog.9845.jpg \n inflating: train/dog.9846.jpg \n inflating: train/dog.9847.jpg \n inflating: train/dog.9848.jpg \n inflating: train/dog.9849.jpg \n inflating: train/dog.985.jpg \n inflating: train/dog.9850.jpg \n inflating: train/dog.9851.jpg \n inflating: train/dog.9852.jpg \n inflating: train/dog.9853.jpg \n inflating: train/dog.9854.jpg \n inflating: train/dog.9855.jpg \n inflating: train/dog.9856.jpg \n inflating: train/dog.9857.jpg \n inflating: train/dog.9858.jpg \n inflating: train/dog.9859.jpg \n inflating: train/dog.986.jpg \n inflating: train/dog.9860.jpg \n inflating: train/dog.9861.jpg \n inflating: train/dog.9862.jpg \n inflating: train/dog.9863.jpg \n inflating: train/dog.9864.jpg \n inflating: train/dog.9865.jpg \n inflating: train/dog.9866.jpg \n inflating: train/dog.9867.jpg \n inflating: train/dog.9868.jpg \n inflating: train/dog.9869.jpg \n inflating: train/dog.987.jpg \n inflating: train/dog.9870.jpg \n inflating: 
train/dog.9871.jpg \n inflating: train/dog.9872.jpg \n inflating: train/dog.9873.jpg \n inflating: train/dog.9874.jpg \n inflating: train/dog.9875.jpg \n inflating: train/dog.9876.jpg \n inflating: train/dog.9877.jpg \n inflating: train/dog.9878.jpg \n inflating: train/dog.9879.jpg \n inflating: train/dog.988.jpg \n inflating: train/dog.9880.jpg \n inflating: train/dog.9881.jpg \n inflating: train/dog.9882.jpg \n inflating: train/dog.9883.jpg \n inflating: train/dog.9884.jpg \n inflating: train/dog.9885.jpg \n inflating: train/dog.9886.jpg \n inflating: train/dog.9887.jpg \n inflating: train/dog.9888.jpg \n inflating: train/dog.9889.jpg \n inflating: train/dog.989.jpg \n inflating: train/dog.9890.jpg \n inflating: train/dog.9891.jpg \n inflating: train/dog.9892.jpg \n inflating: train/dog.9893.jpg \n inflating: train/dog.9894.jpg \n inflating: train/dog.9895.jpg \n inflating: train/dog.9896.jpg \n inflating: train/dog.9897.jpg \n inflating: train/dog.9898.jpg \n inflating: train/dog.9899.jpg \n inflating: train/dog.99.jpg \n inflating: train/dog.990.jpg \n inflating: train/dog.9900.jpg \n inflating: train/dog.9901.jpg \n inflating: train/dog.9902.jpg \n inflating: train/dog.9903.jpg \n inflating: train/dog.9904.jpg \n inflating: train/dog.9905.jpg \n inflating: train/dog.9906.jpg \n inflating: train/dog.9907.jpg \n inflating: train/dog.9908.jpg \n inflating: train/dog.9909.jpg \n inflating: train/dog.991.jpg \n inflating: train/dog.9910.jpg \n inflating: train/dog.9911.jpg \n inflating: train/dog.9912.jpg \n inflating: train/dog.9913.jpg \n inflating: train/dog.9914.jpg \n inflating: train/dog.9915.jpg \n inflating: train/dog.9916.jpg \n inflating: train/dog.9917.jpg \n inflating: train/dog.9918.jpg \n inflating: train/dog.9919.jpg \n inflating: train/dog.992.jpg \n inflating: train/dog.9920.jpg \n inflating: train/dog.9921.jpg \n inflating: train/dog.9922.jpg \n inflating: train/dog.9923.jpg \n inflating: train/dog.9924.jpg \n inflating: train/dog.9925.jpg \n inflating: train/dog.9926.jpg \n inflating: train/dog.9927.jpg \n inflating: train/dog.9928.jpg \n inflating: train/dog.9929.jpg \n inflating: train/dog.993.jpg \n inflating: train/dog.9930.jpg \n inflating: train/dog.9931.jpg \n inflating: train/dog.9932.jpg \n inflating: train/dog.9933.jpg \n inflating: train/dog.9934.jpg \n inflating: train/dog.9935.jpg \n inflating: train/dog.9936.jpg \n inflating: train/dog.9937.jpg \n inflating: train/dog.9938.jpg \n inflating: train/dog.9939.jpg \n inflating: train/dog.994.jpg \n inflating: train/dog.9940.jpg \n inflating: train/dog.9941.jpg \n inflating: train/dog.9942.jpg \n inflating: train/dog.9943.jpg \n inflating: train/dog.9944.jpg \n inflating: train/dog.9945.jpg \n inflating: train/dog.9946.jpg \n inflating: train/dog.9947.jpg \n inflating: train/dog.9948.jpg \n inflating: train/dog.9949.jpg \n inflating: train/dog.995.jpg \n inflating: train/dog.9950.jpg \n inflating: train/dog.9951.jpg \n inflating: train/dog.9952.jpg \n inflating: train/dog.9953.jpg \n inflating: train/dog.9954.jpg \n inflating: train/dog.9955.jpg \n inflating: train/dog.9956.jpg \n inflating: train/dog.9957.jpg \n inflating: train/dog.9958.jpg \n inflating: train/dog.9959.jpg \n inflating: train/dog.996.jpg \n inflating: train/dog.9960.jpg \n inflating: train/dog.9961.jpg \n inflating: train/dog.9962.jpg \n inflating: train/dog.9963.jpg \n inflating: train/dog.9964.jpg \n inflating: train/dog.9965.jpg \n inflating: train/dog.9966.jpg \n inflating: train/dog.9967.jpg \n inflating: train/dog.9968.jpg \n inflating: 
train/dog.9969.jpg \n inflating: train/dog.997.jpg \n inflating: train/dog.9970.jpg \n inflating: train/dog.9971.jpg \n inflating: train/dog.9972.jpg \n inflating: train/dog.9973.jpg \n inflating: train/dog.9974.jpg \n inflating: train/dog.9975.jpg \n inflating: train/dog.9976.jpg \n inflating: train/dog.9977.jpg \n inflating: train/dog.9978.jpg \n inflating: train/dog.9979.jpg \n inflating: train/dog.998.jpg \n inflating: train/dog.9980.jpg \n inflating: train/dog.9981.jpg \n inflating: train/dog.9982.jpg \n inflating: train/dog.9983.jpg \n inflating: train/dog.9984.jpg \n inflating: train/dog.9985.jpg \n inflating: train/dog.9986.jpg \n inflating: train/dog.9987.jpg \n inflating: train/dog.9988.jpg \n inflating: train/dog.9989.jpg \n inflating: train/dog.999.jpg \n inflating: train/dog.9990.jpg \n inflating: train/dog.9991.jpg \n inflating: train/dog.9992.jpg \n inflating: train/dog.9993.jpg \n inflating: train/dog.9994.jpg \n inflating: train/dog.9995.jpg \n inflating: train/dog.9996.jpg \n inflating: train/dog.9997.jpg \n inflating: train/dog.9998.jpg \n inflating: train/dog.9999.jpg \n"
],
[
"import tensorflow as tf\nimport os\n# use gpu/cpu/tpu\n# see details in https://colab.research.google.com/drive/1cpuwjKTJbMjlvZ7opyrWzMXF_NYnjkiE#scrollTo=y3gk7nSvTUFZ\ngpus = tf.config.experimental.list_physical_devices('GPU')\nCOLAB_TPU_ADDR = os.environ.get('COLAB_TPU_ADDR')\nif COLAB_TPU_ADDR:\n resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + COLAB_TPU_ADDR)\n tf.config.experimental_connect_to_cluster(resolver)\n # This is the TPU initialization code that has to be at the beginning.\n tf.tpu.experimental.initialize_tpu_system(resolver)\n strategy = tf.distribute.experimental.TPUStrategy(resolver)\n print('Running on TPU ') \nelif len(gpus) > 1:\n strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])\n print('Running on multiple GPUs ', [gpu.name for gpu in gpus])\nelif len(gpus) == 1:\n strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU\n print('Running on single GPU ', gpus[0].name)\nelse:\n strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU\n print('Running on CPU')\nprint(\"Number of accelerators: \", strategy.num_replicas_in_sync)\n!nvidia-smi",
"Running on single GPU /physical_device:GPU:0\nNumber of accelerators: 1\nSun May 10 21:12:03 2020 \n+-----------------------------------------------------------------------------+\n| NVIDIA-SMI 440.82 Driver Version: 418.67 CUDA Version: 10.1 |\n|-------------------------------+----------------------+----------------------+\n| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n|===============================+======================+======================|\n| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |\n| N/A 48C P8 10W / 70W | 10MiB / 15079MiB | 0% Default |\n+-------------------------------+----------------------+----------------------+\n \n+-----------------------------------------------------------------------------+\n| Processes: GPU Memory |\n| GPU PID Type Process name Usage |\n|=============================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------+\n"
],
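A note on the cell above: with TensorFlow's `tf.distribute` API, a model only runs on the selected TPU/multi-GPU devices if it is built and compiled inside `strategy.scope()`. A minimal sketch of that usage with a throwaway toy model (the real model in this notebook is the InceptionResNetV2 defined further down, which would need the same treatment to benefit from the strategy):

```python
import tensorflow as tf

# Hypothetical toy model; the point is only that variable creation and
# compile() happen inside strategy.scope() so the chosen devices are used.
with strategy.scope():
    toy_model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    toy_model.compile(optimizer="sgd",
                      loss="binary_crossentropy",
                      metrics=["accuracy"])
```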
[
"from google.colab import drive\ndrive.mount('/content/drive')\n\ndef notebook_name():\n import re\n import ipykernel\n import requests\n\n from notebook.notebookapp import list_running_servers\n # kernel_id = re.search('kernel-(.*).json', ipykernel.connect.get_connection_file()).group(1)\n for ss in list_running_servers():\n response = requests.get(f'{ss[\"url\"]}api/sessions',params={'token': ss.get('token', '')})\n return response.json()[0]['name']\n\n\nproject, _, _ = notebook_name().rpartition('.')\n\nimport re \nproject = re.sub('[^-a-zA-Z0-9_]+', '_', project)\n\nworking_dir = f'/content/drive/My Drive/Colab Notebooks/{project}'\nprint(f'Current project: {project}')\nprint(f'Places at: {working_dir}')\n\nimport pathlib\npathlib.Path(working_dir).mkdir(parents=True, exist_ok=True)\n",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\nCurrent project: Copy_20of_20How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2\nPlaces at: /content/drive/My Drive/Colab Notebooks/Copy_20of_20How_much_samples_is_enough_for_transfer_learning_same_steps_per_epoch_InceptionResNetV2\n"
],
[
"!pip install wandb -q\n!WANDB_API_KEY=723983b2d42ccd7c5510bbeb0549aa73f1242844\n!export WANDB_API_KEY\n\nimport wandb\nwandb.init(project=project, dir=working_dir, config=config)\nconf = wandb.config",
"\u001b[?25l\r\u001b[K |▎ | 10kB 27.3MB/s eta 0:00:01\r\u001b[K |▌ | 20kB 30.9MB/s eta 0:00:01\r\u001b[K |▊ | 30kB 35.6MB/s eta 0:00:01\r\u001b[K |█ | 40kB 23.3MB/s eta 0:00:01\r\u001b[K |█▏ | 51kB 16.1MB/s eta 0:00:01\r\u001b[K |█▍ | 61kB 13.4MB/s eta 0:00:01\r\u001b[K |█▋ | 71kB 12.4MB/s eta 0:00:01\r\u001b[K |██ | 81kB 11.8MB/s eta 0:00:01\r\u001b[K |██▏ | 92kB 11.8MB/s eta 0:00:01\r\u001b[K |██▍ | 102kB 11.7MB/s eta 0:00:01\r\u001b[K |██▋ | 112kB 11.7MB/s eta 0:00:01\r\u001b[K |██▉ | 122kB 11.7MB/s eta 0:00:01\r\u001b[K |███ | 133kB 11.7MB/s eta 0:00:01\r\u001b[K |███▎ | 143kB 11.7MB/s eta 0:00:01\r\u001b[K |███▌ | 153kB 11.7MB/s eta 0:00:01\r\u001b[K |███▉ | 163kB 11.7MB/s eta 0:00:01\r\u001b[K |████ | 174kB 11.7MB/s eta 0:00:01\r\u001b[K |████▎ | 184kB 11.7MB/s eta 0:00:01\r\u001b[K |████▌ | 194kB 11.7MB/s eta 0:00:01\r\u001b[K |████▊ | 204kB 11.7MB/s eta 0:00:01\r\u001b[K |█████ | 215kB 11.7MB/s eta 0:00:01\r\u001b[K |█████▏ | 225kB 11.7MB/s eta 0:00:01\r\u001b[K |█████▌ | 235kB 11.7MB/s eta 0:00:01\r\u001b[K |█████▊ | 245kB 11.7MB/s eta 0:00:01\r\u001b[K |██████ | 256kB 11.7MB/s eta 0:00:01\r\u001b[K |██████▏ | 266kB 11.7MB/s eta 0:00:01\r\u001b[K |██████▍ | 276kB 11.7MB/s eta 0:00:01\r\u001b[K |██████▋ | 286kB 11.7MB/s eta 0:00:01\r\u001b[K |██████▉ | 296kB 11.7MB/s eta 0:00:01\r\u001b[K |███████ | 307kB 11.7MB/s eta 0:00:01\r\u001b[K |███████▍ | 317kB 11.7MB/s eta 0:00:01\r\u001b[K |███████▋ | 327kB 11.7MB/s eta 0:00:01\r\u001b[K |███████▉ | 337kB 11.7MB/s eta 0:00:01\r\u001b[K |████████ | 348kB 11.7MB/s eta 0:00:01\r\u001b[K |████████▎ | 358kB 11.7MB/s eta 0:00:01\r\u001b[K |████████▌ | 368kB 11.7MB/s eta 0:00:01\r\u001b[K |████████▊ | 378kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████ | 389kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████▎ | 399kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████▌ | 409kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████▊ | 419kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████ | 430kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████▏ | 440kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████▍ | 450kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████▋ | 460kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████ | 471kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████▏ | 481kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████▍ | 491kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████▋ | 501kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████▉ | 512kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████ | 522kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████▎ | 532kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████▋ | 542kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████▉ | 552kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████ | 563kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████▎ | 573kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████▌ | 583kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████▊ | 593kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████ | 604kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████▏ | 614kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████▌ | 624kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████▊ | 634kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████ | 645kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████▏ | 655kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████▍ | 665kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████▋ | 675kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████▉ | 686kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████ | 696kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████▍ | 706kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████▋ | 716kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████▉ | 727kB 11.7MB/s eta 
0:00:01\r\u001b[K |█████████████████ | 737kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████▎ | 747kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████▌ | 757kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████▊ | 768kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████ | 778kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████▎ | 788kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████▌ | 798kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████▊ | 808kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████ | 819kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████▏ | 829kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████▍ | 839kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████▋ | 849kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████ | 860kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████▏ | 870kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████▍ | 880kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████▋ | 890kB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████▉ | 901kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████ | 911kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████▎ | 921kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████▋ | 931kB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████▉ | 942kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████ | 952kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████▎ | 962kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████▌ | 972kB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████▊ | 983kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████ | 993kB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████▏ | 1.0MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████▌ | 1.0MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████▊ | 1.0MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 1.0MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████▏ | 1.0MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████▍ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████▋ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████▉ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████▏ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████▍ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████▋ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████▉ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████▎ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████▌ | 1.1MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████▊ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████▎ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████▌ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████▊ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████▏ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████▍ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████▋ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████████ | 1.2MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▏ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▍ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▋ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▉ | 1.3MB 
11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████████ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▎ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▋ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▉ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████████ | 1.3MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▎| 1.4MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▌| 1.4MB 11.7MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 1.4MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 1.4MB 11.7MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 1.4MB 11.7MB/s \n\u001b[K |████████████████████████████████| 460kB 53.5MB/s \n\u001b[K |████████████████████████████████| 112kB 58.0MB/s \n\u001b[K |████████████████████████████████| 102kB 14.3MB/s \n\u001b[K |████████████████████████████████| 102kB 13.3MB/s \n\u001b[K |████████████████████████████████| 71kB 10.5MB/s \n\u001b[K |████████████████████████████████| 71kB 10.7MB/s \n\u001b[?25h Building wheel for watchdog (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for gql (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for subprocess32 (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for pathtools (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for graphql-core (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
],
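One caveat in the cell above: `!WANDB_API_KEY=...` and `!export WANDB_API_KEY` each run in their own throwaway subshell, so the key never actually reaches the Python process (and hard-coding a real key in a notebook is best avoided). A sketch of a way to set it that `wandb` will pick up; the key string here is a placeholder, not a real credential:

```python
import os

# wandb reads WANDB_API_KEY from the process environment, so setting it in
# os.environ persists for the rest of the notebook session.
os.environ["WANDB_API_KEY"] = "<your-api-key>"  # placeholder value

import wandb
wandb.login()  # uses WANDB_API_KEY when it is set; otherwise prompts
```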
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\nfrom keras.preprocessing.image import ImageDataGenerator, load_img\nfrom keras.utils import to_categorical\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt\nimport random\n\nimport os\n\ntrain_dir = 'train/'\ntest_dir = 'test1/'\nfilenames = os.listdir(train_dir)\ncategories = []\nfor filename in filenames:\n category = filename.split('.')[0]\n if category == 'dog':\n categories.append(\"1\")\n else:\n categories.append(\"0\")",
"Using TensorFlow backend.\n"
],
[
"from keras.models import Sequential\nfrom keras import layers\nfrom keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense, Activation,GlobalMaxPooling2D\nfrom keras import applications\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras import optimizers\nfrom keras.applications import VGG16\nfrom keras.applications import InceptionResNetV2\nfrom keras.models import Model\nfrom keras.callbacks import EarlyStopping\nimport wandb\nfrom wandb.keras import WandbCallback\n\ndef run(samples):\n conf.batch_size = 64, # input batch size for training (default: 64)\n conf.epochs = 20, # number of epochs to train (default: 10)\n conf.lr = 1e-4, # learning rate (default: 0.01)\n conf.momentum = 0.9, # SGD momentum (default: 0.5) \n conf.steps_per_epoch = 20000/64, # shoud be total_train//config.batch_size\n\n df = pd.DataFrame({\n 'filename': filenames,\n 'category': categories\n })\n df.head()\n\n image_size = 224\n input_shape = (image_size, image_size, 3)\n\n epochs = config.epochs\n batch_size = 16\n\n pre_trained_model = InceptionResNetV2(input_shape=input_shape, include_top=False, weights=\"imagenet\")\n pre_trained_model.summary()\n\n for layer in pre_trained_model.layers[:15]:\n layer.trainable = False\n\n for layer in pre_trained_model.layers[15:]:\n layer.trainable = True\n\n last_layer = pre_trained_model.get_layer('conv_7b_ac')\n last_output = last_layer.output\n\n x = GlobalMaxPooling2D()(last_output)\n x = Dense(512, activation='relu')(x)\n x = Dropout(0.5)(x)\n x = layers.Dense(1, activation='sigmoid')(x)\n\n model = Model(pre_trained_model.input, x)\n\n model.compile(loss='binary_crossentropy',\n optimizer=optimizers.SGD(lr=config.lr, momentum=config.momentum),\n metrics=['accuracy'])\n\n train_df, validate_df = train_test_split(df, test_size=0.1)\n train_df = train_df.reset_index()\n validate_df = validate_df.reset_index()\n\n validate_df = validate_df.sample(n=2000).reset_index() # use for fast testing code purpose\n train_df = train_df.sample(n=samples).reset_index() # use for fast testing code purpose\n\n total_train = train_df.shape[0]\n total_validate = validate_df.shape[0]\n\n train_datagen = ImageDataGenerator(\n rotation_range=15,\n rescale=1./255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n fill_mode='nearest',\n width_shift_range=0.1,\n height_shift_range=0.1\n )\n\n train_generator = train_datagen.flow_from_dataframe(\n train_df, \n train_dir, \n x_col='filename',\n y_col='category',\n class_mode='binary',\n target_size=(image_size, image_size),\n batch_size=batch_size\n )\n\n validation_datagen = ImageDataGenerator(rescale=1./255)\n validation_generator = validation_datagen.flow_from_dataframe(\n validate_df, \n train_dir, \n x_col='filename',\n y_col='category',\n class_mode='binary',\n target_size=(image_size, image_size),\n batch_size=batch_size\n )\n\n # fine-tune the model\n history = model.fit_generator(\n train_generator,\n epochs=config.epochs,\n validation_data=validation_generator,\n validation_steps=total_validate//config.batch_size,\n steps_per_epoch=config.steps_per_epoch,\n verbose=2,\n callbacks=[WandbCallback(save_model=True,\n verbose=1)])\n #EarlyStopping(monitor='val_loss'), \n return history\n",
"_____no_output_____"
],
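Two details in `run()` worth flagging. The `conf.batch_size = 64,` style assignments end with a comma, so what gets stored is a one-element tuple rather than a number; and the function later reads `config.epochs` / `config.lr` / `config.batch_size`, which only works if a `config` object with scalar fields was defined in an earlier cell (not shown here). A small sketch of the tuple pitfall:

```python
# A trailing comma turns the right-hand side into a tuple.
batch_size = 64,        # -> (64,)  probably not what was intended
learning_rate = 1e-4,   # -> (0.0001,)

# Without the comma the values stay scalars.
batch_size = 64
learning_rate = 1e-4

print(type((64,)), type(64))  # <class 'tuple'> <class 'int'>
```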
[
"counts = [10, 50, 100]\nresults = []\nresults_count = []\nfor count in counts:\n h = run(count)\n results.append(h.history)\n results_count.append(count)",
"_____no_output_____"
],
[
"val_acc = [(i, max(result['val_accuracy'])) for i, result in zip(results_count, results)]\nacc = [(i, max(result['accuracy'])) for i, result in zip(results_count, results)]\nval_loss = [(i, min(result['val_loss'])) for i, result in zip(results_count, results)]\nloss = [(i, min(result['loss'])) for i, result in zip(results_count, results)]\nimport matplotlib.pyplot as plt\nplt.plot(*zip(*val_acc), '-o', label='val_acc')\nplt.plot(*zip(*acc), '-o', label='acc')\nplt.plot(*zip(*val_loss), '-o', label='val_loss')\nplt.plot(*zip(*loss), '-o', label='loss')\nplt.legend()",
"_____no_output_____"
],
[
"val_acc",
"_____no_output_____"
]
],
[
[
"**500 samples is ok for binary classification**",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c30b1ba4aedb8ddc6b8997ad63c7487d5468e6 | 31,966 | ipynb | Jupyter Notebook | Exploring-the-neighborhoods-in-Toronto.ipynb | Gabriellavoura/Coursera_Capstone | f485648a11ed3b872a10ab98fe3dd2c975f8c5e8 | [
"MIT"
] | null | null | null | Exploring-the-neighborhoods-in-Toronto.ipynb | Gabriellavoura/Coursera_Capstone | f485648a11ed3b872a10ab98fe3dd2c975f8c5e8 | [
"MIT"
] | null | null | null | Exploring-the-neighborhoods-in-Toronto.ipynb | Gabriellavoura/Coursera_Capstone | f485648a11ed3b872a10ab98fe3dd2c975f8c5e8 | [
"MIT"
] | null | null | null | 31.308521 | 145 | 0.485641 | [
[
[
"# Scraping the Data",
"_____no_output_____"
],
[
"### I used the BeautifulSoup package to transform the data in the table on the Wikipedia page into pandas dataframe",
"_____no_output_____"
]
],
[
[
"import requests\nwebsite_url = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text",
"_____no_output_____"
],
[
"from bs4 import BeautifulSoup\nsoup = BeautifulSoup(website_url,'lxml')",
"_____no_output_____"
]
],
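As a design note, the same Wikipedia table can usually be loaded in one step with `pandas.read_html`, which parses every `<table>` on the page into a DataFrame; the BeautifulSoup route used here just gives finer control over the rows. A sketch of the shorter alternative (it assumes the postal-code table is the first table on the article, which is worth verifying):

```python
import pandas as pd

# read_html returns a list with one DataFrame per <table> found on the page.
tables = pd.read_html("https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M")
df_raw = tables[0]  # assumed to be the postal-code table
print(df_raw.head())
```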
[
[
"### We can observe that the tabular data is availabe in table and belongs to class=\"wikitable sortable\", So we need to extract only table",
"_____no_output_____"
]
],
[
[
"Minha_tabela = soup.find('table',{'class':'wikitable sortable'})",
"_____no_output_____"
],
[
"print(Minha_tabela.tr.text)",
"\nPostcode\nBorough\nNeighbourhood\n\n"
],
[
"headers=\"Postcode,Borough,Neighbourhood\"",
"_____no_output_____"
]
],
[
[
"### Geting the values in tr and separate each td within by \",\"\n",
"_____no_output_____"
]
],
[
[
"tabela1=\"\"\nfor tr in Minha_tabela.find_all('tr'):\n row1=\"\"\n for tds in tr.find_all('td'):\n row1=row1+\",\"+tds.text\n tabela1=tabela1+row1[1:]\nprint(tabela1)",
"M1A,Not assigned,Not assigned\nM2A,Not assigned,Not assigned\nM3A,North York,Parkwoods\nM4A,North York,Victoria Village\nM5A,Downtown Toronto,Harbourfront\nM6A,North York,Lawrence Heights\nM6A,North York,Lawrence Manor\nM7A,Downtown Toronto,Queen's Park\nM8A,Not assigned,Not assigned\nM9A,Etobicoke,Islington Avenue\nM1B,Scarborough,Rouge\nM1B,Scarborough,Malvern\nM2B,Not assigned,Not assigned\nM3B,North York,Don Mills North\nM4B,East York,Woodbine Gardens\nM4B,East York,Parkview Hill\nM5B,Downtown Toronto,Ryerson\nM5B,Downtown Toronto,Garden District\nM6B,North York,Glencairn\nM7B,Not assigned,Not assigned\nM8B,Not assigned,Not assigned\nM9B,Etobicoke,Cloverdale\nM9B,Etobicoke,Islington\nM9B,Etobicoke,Martin Grove\nM9B,Etobicoke,Princess Gardens\nM9B,Etobicoke,West Deane Park\nM1C,Scarborough,Highland Creek\nM1C,Scarborough,Rouge Hill\nM1C,Scarborough,Port Union\nM2C,Not assigned,Not assigned\nM3C,North York,Flemingdon Park\nM3C,North York,Don Mills South\nM4C,East York,Woodbine Heights\nM5C,Downtown Toronto,St. James Town\nM6C,York,Humewood-Cedarvale\nM7C,Not assigned,Not assigned\nM8C,Not assigned,Not assigned\nM9C,Etobicoke,Bloordale Gardens\nM9C,Etobicoke,Eringate\nM9C,Etobicoke,Markland Wood\nM9C,Etobicoke,Old Burnhamthorpe\nM1E,Scarborough,Guildwood\nM1E,Scarborough,Morningside\nM1E,Scarborough,West Hill\nM2E,Not assigned,Not assigned\nM3E,Not assigned,Not assigned\nM4E,East Toronto,The Beaches\nM5E,Downtown Toronto,Berczy Park\nM6E,York,Caledonia-Fairbanks\nM7E,Not assigned,Not assigned\nM8E,Not assigned,Not assigned\nM9E,Not assigned,Not assigned\nM1G,Scarborough,Woburn\nM2G,Not assigned,Not assigned\nM3G,Not assigned,Not assigned\nM4G,East York,Leaside\nM5G,Downtown Toronto,Central Bay Street\nM6G,Downtown Toronto,Christie\nM7G,Not assigned,Not assigned\nM8G,Not assigned,Not assigned\nM9G,Not assigned,Not assigned\nM1H,Scarborough,Cedarbrae\nM2H,North York,Hillcrest Village\nM3H,North York,Bathurst Manor\nM3H,North York,Downsview North\nM3H,North York,Wilson Heights\nM4H,East York,Thorncliffe Park\nM5H,Downtown Toronto,Adelaide\nM5H,Downtown Toronto,King\nM5H,Downtown Toronto,Richmond\nM6H,West Toronto,Dovercourt Village\nM6H,West Toronto,Dufferin\nM7H,Not assigned,Not assigned\nM8H,Not assigned,Not assigned\nM9H,Not assigned,Not assigned\nM1J,Scarborough,Scarborough Village\nM2J,North York,Fairview\nM2J,North York,Henry Farm\nM2J,North York,Oriole\nM3J,North York,Northwood Park\nM3J,North York,York University\nM4J,East York,East Toronto\nM5J,Downtown Toronto,Harbourfront East\nM5J,Downtown Toronto,Toronto Islands\nM5J,Downtown Toronto,Union Station\nM6J,West Toronto,Little Portugal\nM6J,West Toronto,Trinity\nM7J,Not assigned,Not assigned\nM8J,Not assigned,Not assigned\nM9J,Not assigned,Not assigned\nM1K,Scarborough,East Birchmount Park\nM1K,Scarborough,Ionview\nM1K,Scarborough,Kennedy Park\nM2K,North York,Bayview Village\nM3K,North York,CFB Toronto\nM3K,North York,Downsview East\nM4K,East Toronto,The Danforth West\nM4K,East Toronto,Riverdale\nM5K,Downtown Toronto,Design Exchange\nM5K,Downtown Toronto,Toronto Dominion Centre\nM6K,West Toronto,Brockton\nM6K,West Toronto,Exhibition Place\nM6K,West Toronto,Parkdale Village\nM7K,Not assigned,Not assigned\nM8K,Not assigned,Not assigned\nM9K,Not assigned,Not assigned\nM1L,Scarborough,Clairlea\nM1L,Scarborough,Golden Mile\nM1L,Scarborough,Oakridge\nM2L,North York,Silver Hills\nM2L,North York,York Mills\nM3L,North York,Downsview West\nM4L,East Toronto,The Beaches West\nM4L,East Toronto,India Bazaar\nM5L,Downtown Toronto,Commerce 
Court\nM5L,Downtown Toronto,Victoria Hotel\nM6L,North York,Downsview\nM6L,North York,North Park\nM6L,North York,Upwood Park\nM7L,Not assigned,Not assigned\nM8L,Not assigned,Not assigned\nM9L,North York,Humber Summit\nM1M,Scarborough,Cliffcrest\nM1M,Scarborough,Cliffside\nM1M,Scarborough,Scarborough Village West\nM2M,North York,Newtonbrook\nM2M,North York,Willowdale\nM3M,North York,Downsview Central\nM4M,East Toronto,Studio District\nM5M,North York,Bedford Park\nM5M,North York,Lawrence Manor East\nM6M,York,Del Ray\nM6M,York,Keelesdale\nM6M,York,Mount Dennis\nM6M,York,Silverthorn\nM7M,Not assigned,Not assigned\nM8M,Not assigned,Not assigned\nM9M,North York,Emery\nM9M,North York,Humberlea\nM1N,Scarborough,Birch Cliff\nM1N,Scarborough,Cliffside West\nM2N,North York,Willowdale South\nM3N,North York,Downsview Northwest\nM4N,Central Toronto,Lawrence Park\nM5N,Central Toronto,Roselawn\nM6N,York,The Junction North\nM6N,York,Runnymede\nM7N,Not assigned,Not assigned\nM8N,Not assigned,Not assigned\nM9N,York,Weston\nM1P,Scarborough,Dorset Park\nM1P,Scarborough,Scarborough Town Centre\nM1P,Scarborough,Wexford Heights\nM2P,North York,York Mills West\nM3P,Not assigned,Not assigned\nM4P,Central Toronto,Davisville North\nM5P,Central Toronto,Forest Hill North\nM5P,Central Toronto,Forest Hill West\nM6P,West Toronto,High Park\nM6P,West Toronto,The Junction South\nM7P,Not assigned,Not assigned\nM8P,Not assigned,Not assigned\nM9P,Etobicoke,Westmount\nM1R,Scarborough,Maryvale\nM1R,Scarborough,Wexford\nM2R,North York,Willowdale West\nM3R,Not assigned,Not assigned\nM4R,Central Toronto,North Toronto West\nM5R,Central Toronto,The Annex\nM5R,Central Toronto,North Midtown\nM5R,Central Toronto,Yorkville\nM6R,West Toronto,Parkdale\nM6R,West Toronto,Roncesvalles\nM7R,Mississauga,Canada Post Gateway Processing Centre\nM8R,Not assigned,Not assigned\nM9R,Etobicoke,Kingsview Village\nM9R,Etobicoke,Martin Grove Gardens\nM9R,Etobicoke,Richview Gardens\nM9R,Etobicoke,St. 
Phillips\nM1S,Scarborough,Agincourt\nM2S,Not assigned,Not assigned\nM3S,Not assigned,Not assigned\nM4S,Central Toronto,Davisville\nM5S,Downtown Toronto,Harbord\nM5S,Downtown Toronto,University of Toronto\nM6S,West Toronto,Runnymede\nM6S,West Toronto,Swansea\nM7S,Not assigned,Not assigned\nM8S,Not assigned,Not assigned\nM9S,Not assigned,Not assigned\nM1T,Scarborough,Clarks Corners\nM1T,Scarborough,Sullivan\nM1T,Scarborough,Tam O'Shanter\nM2T,Not assigned,Not assigned\nM3T,Not assigned,Not assigned\nM4T,Central Toronto,Moore Park\nM4T,Central Toronto,Summerhill East\nM5T,Downtown Toronto,Chinatown\nM5T,Downtown Toronto,Grange Park\nM5T,Downtown Toronto,Kensington Market\nM6T,Not assigned,Not assigned\nM7T,Not assigned,Not assigned\nM8T,Not assigned,Not assigned\nM9T,Not assigned,Not assigned\nM1V,Scarborough,Agincourt North\nM1V,Scarborough,L'Amoreaux East\nM1V,Scarborough,Milliken\nM1V,Scarborough,Steeles East\nM2V,Not assigned,Not assigned\nM3V,Not assigned,Not assigned\nM4V,Central Toronto,Deer Park\nM4V,Central Toronto,Forest Hill SE\nM4V,Central Toronto,Rathnelly\nM4V,Central Toronto,South Hill\nM4V,Central Toronto,Summerhill West\nM5V,Downtown Toronto,CN Tower\nM5V,Downtown Toronto,Bathurst Quay\nM5V,Downtown Toronto,Island airport\nM5V,Downtown Toronto,Harbourfront West\nM5V,Downtown Toronto,King and Spadina\nM5V,Downtown Toronto,Railway Lands\nM5V,Downtown Toronto,South Niagara\nM6V,Not assigned,Not assigned\nM7V,Not assigned,Not assigned\nM8V,Etobicoke,Humber Bay Shores\nM8V,Etobicoke,Mimico South\nM8V,Etobicoke,New Toronto\nM9V,Etobicoke,Albion Gardens\nM9V,Etobicoke,Beaumond Heights\nM9V,Etobicoke,Humbergate\nM9V,Etobicoke,Jamestown\nM9V,Etobicoke,Mount Olive\nM9V,Etobicoke,Silverstone\nM9V,Etobicoke,South Steeles\nM9V,Etobicoke,Thistletown\nM1W,Scarborough,L'Amoreaux West\nM2W,Not assigned,Not assigned\nM3W,Not assigned,Not assigned\nM4W,Downtown Toronto,Rosedale\nM5W,Downtown Toronto,Stn A PO Boxes 25 The Esplanade\nM6W,Not assigned,Not assigned\nM7W,Not assigned,Not assigned\nM8W,Etobicoke,Alderwood\nM8W,Etobicoke,Long Branch\nM9W,Etobicoke,Northwest\nM1X,Scarborough,Upper Rouge\nM2X,Not assigned,Not assigned\nM3X,Not assigned,Not assigned\nM4X,Downtown Toronto,Cabbagetown\nM4X,Downtown Toronto,St. James Town\nM5X,Downtown Toronto,First Canadian Place\nM5X,Downtown Toronto,Underground city\nM6X,Not assigned,Not assigned\nM7X,Not assigned,Not assigned\nM8X,Etobicoke,The Kingsway\nM8X,Etobicoke,Montgomery Road\nM8X,Etobicoke,Old Mill North\nM9X,Not assigned,Not assigned\nM1Y,Not assigned,Not assigned\nM2Y,Not assigned,Not assigned\nM3Y,Not assigned,Not assigned\nM4Y,Downtown Toronto,Church and Wellesley\nM5Y,Not assigned,Not assigned\nM6Y,Not assigned,Not assigned\nM7Y,East Toronto,Business Reply Mail Processing Centre 969 Eastern\nM8Y,Etobicoke,Humber Bay\nM8Y,Etobicoke,King's Mill Park\nM8Y,Etobicoke,Kingsway Park South East\nM8Y,Etobicoke,Mimico NE\nM8Y,Etobicoke,Old Mill South\nM8Y,Etobicoke,The Queensway East\nM8Y,Etobicoke,Royal York South East\nM8Y,Etobicoke,Sunnylea\nM9Y,Not assigned,Not assigned\nM1Z,Not assigned,Not assigned\nM2Z,Not assigned,Not assigned\nM3Z,Not assigned,Not assigned\nM4Z,Not assigned,Not assigned\nM5Z,Not assigned,Not assigned\nM6Z,Not assigned,Not assigned\nM7Z,Not assigned,Not assigned\nM8Z,Etobicoke,Kingsway Park South West\nM8Z,Etobicoke,Mimico NW\nM8Z,Etobicoke,The Queensway West\nM8Z,Etobicoke,Royal York South West\nM8Z,Etobicoke,South of Bloor\nM9Z,Not assigned,Not assigned\n\n"
]
],
[
[
"### Store the data in .csv file\n",
"_____no_output_____"
]
],
[
[
"file=open(\"toronto-data.csv\",\"wb\")\nfile.write(bytes(tabela1,encoding=\"ascii\",errors=\"ignore\"))",
"_____no_output_____"
]
],
[
[
"### Converting into dataframe and assigning column names",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.read_csv('toronto-data.csv',header=None)\ndf.columns=[\"Postalcode\",\"Borough\",\"Neighbourhood\"]",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
]
],
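The intermediate `toronto-data.csv` file is optional: the scraped string can be parsed in memory instead. A sketch, assuming `tabela1` still holds the comma-separated rows built above:

```python
import io
import pandas as pd

# Parse the scraped text directly instead of round-tripping through a file.
df = pd.read_csv(io.StringIO(tabela1), header=None,
                 names=["Postalcode", "Borough", "Neighbourhood"])
df.head()
```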
[
[
"### Drop row where borough is \"Not assigned\"",
"_____no_output_____"
]
],
[
[
"# Get names of indexes for which column Borough has value \"Not assigned\"\nindexNames = df[ df['Borough'] =='Not assigned'].index\n\n# Delete these row indexes from dataFrame\ndf.drop(indexNames , inplace=True)",
"_____no_output_____"
],
[
"df.head(10)",
"_____no_output_____"
]
],
[
[
"### If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough",
"_____no_output_____"
]
],
[
[
"df.loc[df['Neighbourhood'] =='Not assigned' , 'Neighbourhood'] = df['Borough']\ndf.head(10)",
"_____no_output_____"
]
],
[
[
"\n### Rows will be same postalcode will combined into one row with the neighborhoods separated with a comma\n",
"_____no_output_____"
]
],
[
[
"result = df.groupby(['Postalcode','Borough'], sort=False).agg( ', '.join)",
"_____no_output_____"
],
[
"df_new=result.reset_index()\ndf_new.head(10)",
"_____no_output_____"
]
],
[
[
"## Store data in new .csv with clean data.",
"_____no_output_____"
]
],
[
[
"df_new.to_csv(r'Toronto-data-processed.csv', index = False, header = True)",
"_____no_output_____"
]
],
[
[
"#### That's It, Thank you :)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c320d4e18041192666dab873f123c9b9558e58 | 36,658 | ipynb | Jupyter Notebook | notebooks/Daily and Weekly Returns.ipynb | yantraguru/niftystats | afe59f88735c174be60a01dbd56c40235eb2f997 | [
"Apache-2.0"
] | 1 | 2020-09-05T06:51:37.000Z | 2020-09-05T06:51:37.000Z | notebooks/Daily and Weekly Returns.ipynb | yantraguru/niftystats | afe59f88735c174be60a01dbd56c40235eb2f997 | [
"Apache-2.0"
] | null | null | null | notebooks/Daily and Weekly Returns.ipynb | yantraguru/niftystats | afe59f88735c174be60a01dbd56c40235eb2f997 | [
"Apache-2.0"
] | null | null | null | 113.49226 | 15,028 | 0.867178 | [
[
[
"import numpy as np\nimport pandas as pd\n\nimport datetime\nfrom pandas.tseries.frequencies import to_offset\n\nimport niftyutils\nfrom niftyutils import load_nifty_data\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"start_date = datetime.datetime(2005,8,1)\nend_date = datetime.datetime(2020,9,25)\nnifty_data = load_nifty_data(start_date,end_date)",
"_____no_output_____"
]
],
[
[
"## Daily Return Distribution (For 15 years)",
"_____no_output_____"
]
],
[
[
"daily_returns = (nifty_data['Close']/nifty_data['Close'].shift(1) - 1)*100\ndaily_returns = daily_returns.dropna()",
"_____no_output_____"
],
[
"daily_returns.describe()",
"_____no_output_____"
],
[
"plt.figure(figsize=[8,7])\nplt.style.use(\"bmh\")\n\nplt.hist(daily_returns, density = True, bins=20, color='#2ab0ff',alpha=0.55)\n\nplt.xlabel('% return', fontsize=15)\nplt.xticks(fontsize=12)\nplt.yticks(fontsize=12)\n\nplt.tick_params(left = False, bottom = False)\nplt.title('NIFTY daily % returns ({} samples)'.format(len(daily_returns)),fontsize=15)\nplt.grid(False)\nplt.show()",
"_____no_output_____"
],
[
"custom_bins = [daily_returns.min(),-2.5,-2,-1.5,-1,0.-0.75,0.75,1.0,1.5,2.0,2.5,daily_returns.max()]\ncategorized_daily_returns = pd.cut(daily_returns, bins=custom_bins)\ncategorized_daily_returns.value_counts(normalize=True,sort=False)",
"_____no_output_____"
],
[
"custom_bins_compact = [daily_returns.min(),-3,-1.5,-1.0,1.0,1.5,3.0,daily_returns.max()]\ncategorized_daily_returns = pd.cut(daily_returns, bins=custom_bins_compact)\ncategorized_daily_returns.value_counts(normalize=True,sort=False)",
"_____no_output_____"
]
],
[
[
"## Weekly Return Distribution (For 15 years)",
"_____no_output_____"
]
],
[
[
"weekly_nifty_data = nifty_data.resample('W').agg(niftyutils.OHLC_CONVERSION_DICT)\nweekly_nifty_data.index = weekly_nifty_data.index - to_offset('6D')",
"_____no_output_____"
],
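`niftyutils.OHLC_CONVERSION_DICT` comes from the local helper module, so its definition is not visible in this notebook. For resampling OHLC bars it is conventionally a mapping like the sketch below, which is an assumption about what the helper contains rather than its actual source:

```python
# Typical aggregation rules for turning daily OHLC bars into weekly bars.
# The real niftyutils.OHLC_CONVERSION_DICT (and the available columns) may differ.
OHLC_CONVERSION_DICT = {
    "Open": "first",
    "High": "max",
    "Low": "min",
    "Close": "last",
    "Volume": "sum",
}

weekly = nifty_data.resample("W").agg(OHLC_CONVERSION_DICT)
```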
[
"weekly_returns = (weekly_nifty_data['Close']/weekly_nifty_data['Close'].shift(1) - 1)*100\nweekly_returns = weekly_returns.dropna().rename('returns')",
"_____no_output_____"
],
[
"weekly_returns.describe()",
"_____no_output_____"
],
[
"plt.figure(figsize=[8,7])\nplt.style.use(\"bmh\")\n\nplt.hist(weekly_returns, density = True, bins=20, color='#2ab0ff',alpha=0.55)\n\nplt.xlabel('% return', fontsize=15)\nplt.xticks(fontsize=12)\nplt.yticks(fontsize=12)\n\nplt.tick_params(left = False, bottom = False)\nplt.title('NIFTY weekly % returns ({} samples)'.format(len(weekly_returns)),fontsize=15)\nplt.grid(False)\nplt.show()",
"_____no_output_____"
],
[
"custom_bins_compact = [weekly_returns.min(),-5,-2.5,2.5,5,weekly_returns.max()]\ncategorized_weekly_returns = pd.cut(weekly_returns, bins=custom_bins_compact)\ncategorized_weekly_returns.value_counts(normalize=True,sort=False)",
"_____no_output_____"
],
[
"custom_bins_labels = ['-ve Extreme','-ve','normal','+ve','+ve Extreme']\nreturn_categories = pd.cut(weekly_returns, bins=custom_bins_compact,labels=custom_bins_labels).rename('category')\nweekly_returns_categorized = pd.concat([weekly_returns, return_categories], axis=1)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c3299f8d1e9528947fecb3f2261f46cb14c889 | 192,153 | ipynb | Jupyter Notebook | FA/Q-Learning with Value Function Approximation Solution.ipynb | Bleyddyn/reinforcement-learning | 5140c8879d739c090880391fa47636a15be6e34a | [
"MIT"
] | null | null | null | FA/Q-Learning with Value Function Approximation Solution.ipynb | Bleyddyn/reinforcement-learning | 5140c8879d739c090880391fa47636a15be6e34a | [
"MIT"
] | null | null | null | FA/Q-Learning with Value Function Approximation Solution.ipynb | Bleyddyn/reinforcement-learning | 5140c8879d739c090880391fa47636a15be6e34a | [
"MIT"
] | null | null | null | 488.938931 | 91,008 | 0.925247 | [
[
[
"%matplotlib inline\n\nimport gym\nimport itertools\nimport matplotlib\nimport numpy as np\nimport sys\nimport sklearn.pipeline\nimport sklearn.preprocessing\n\nif \"../\" not in sys.path:\n sys.path.append(\"../\") \n\nfrom lib import plotting\nfrom sklearn.linear_model import SGDRegressor\nfrom sklearn.kernel_approximation import RBFSampler\n\nmatplotlib.style.use('ggplot')",
"_____no_output_____"
],
[
"env = gym.envs.make(\"MountainCar-v0\")",
"[2016-11-06 15:54:37,301] Making new env: MountainCar-v0\n"
],
[
"# Feature Preprocessing: Normalize to zero mean and unit variance\n# We use a few samples from the observation space to do this\nobservation_examples = np.array([env.observation_space.sample() for x in range(10000)])\nscaler = sklearn.preprocessing.StandardScaler()\nscaler.fit(observation_examples)\n\n# Used to converte a state to a featurizes represenation.\n# We use RBF kernels with different variances to cover different parts of the space\nfeaturizer = sklearn.pipeline.FeatureUnion([\n (\"rbf1\", RBFSampler(gamma=5.0, n_components=100)),\n (\"rbf2\", RBFSampler(gamma=2.0, n_components=100)),\n (\"rbf3\", RBFSampler(gamma=1.0, n_components=100)),\n (\"rbf4\", RBFSampler(gamma=0.5, n_components=100))\n ])\nfeaturizer.fit(scaler.transform(observation_examples))",
"_____no_output_____"
],
[
"class Estimator():\n \"\"\"\n Value Function approximator. \n \"\"\"\n \n def __init__(self):\n # We create a separate model for each action in the environment's\n # action space. Alternatively we could somehow encode the action\n # into the features, but this way it's easier to code up.\n self.models = []\n for _ in range(env.action_space.n):\n model = SGDRegressor(learning_rate=\"constant\")\n # We need to call partial_fit once to initialize the model\n # or we get a NotFittedError when trying to make a prediction\n # This is quite hacky.\n model.partial_fit([self.featurize_state(env.reset())], [0])\n self.models.append(model)\n \n def featurize_state(self, state):\n \"\"\"\n Returns the featurized representation for a state.\n \"\"\"\n scaled = scaler.transform([state])\n featurized = featurizer.transform(scaled)\n return featurized[0]\n \n def predict(self, s, a=None):\n \"\"\"\n Makes value function predictions.\n \n Args:\n s: state to make a prediction for\n a: (Optional) action to make a prediction for\n \n Returns\n If an action a is given this returns a single number as the prediction.\n If no action is given this returns a vector or predictions for all actions\n in the environment where pred[i] is the prediction for action i.\n \n \"\"\"\n features = self.featurize_state(s)\n if not a:\n return np.array([m.predict([features])[0] for m in self.models])\n else:\n return self.models[a].predict([features])[0]\n \n def update(self, s, a, y):\n \"\"\"\n Updates the estimator parameters for a given state and action towards\n the target y.\n \"\"\"\n features = self.featurize_state(s)\n self.models[a].partial_fit([features], [y])",
"_____no_output_____"
],
[
"def make_epsilon_greedy_policy(estimator, epsilon, nA):\n \"\"\"\n Creates an epsilon-greedy policy based on a given Q-function approximator and epsilon.\n \n Args:\n estimator: An estimator that returns q values for a given state\n epsilon: The probability to select a random action . float between 0 and 1.\n nA: Number of actions in the environment.\n \n Returns:\n A function that takes the observation as an argument and returns\n the probabilities for each action in the form of a numpy array of length nA.\n \n \"\"\"\n def policy_fn(observation):\n A = np.ones(nA, dtype=float) * epsilon / nA\n q_values = estimator.predict(observation)\n best_action = np.argmax(q_values)\n A[best_action] += (1.0 - epsilon)\n return A\n return policy_fn",
"_____no_output_____"
],
[
"def q_learning(env, estimator, num_episodes, discount_factor=1.0, epsilon=0.1, epsilon_decay=1.0):\n \"\"\"\n Q-Learning algorithm for fff-policy TD control using Function Approximation.\n Finds the optimal greedy policy while following an epsilon-greedy policy.\n \n Args:\n env: OpenAI environment.\n estimator: Action-Value function estimator\n num_episodes: Number of episodes to run for.\n discount_factor: Lambda time discount factor.\n epsilon: Chance the sample a random action. Float betwen 0 and 1.\n epsilon_decay: Each episode, epsilon is decayed by this factor\n \n Returns:\n An EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards.\n \"\"\"\n\n # Keeps track of useful statistics\n stats = plotting.EpisodeStats(\n episode_lengths=np.zeros(num_episodes),\n episode_rewards=np.zeros(num_episodes)) \n \n for i_episode in range(num_episodes):\n \n # The policy we're following\n policy = make_epsilon_greedy_policy(\n estimator, epsilon * epsilon_decay**i_episode, env.action_space.n)\n \n # Print out which episode we're on, useful for debugging.\n # Also print reward for last episode\n last_reward = stats.episode_rewards[i_episode - 1]\n sys.stdout.flush()\n \n # Reset the environment and pick the first action\n state = env.reset()\n \n # Only used for SARSA, not Q-Learning\n next_action = None\n \n # One step in the environment\n for t in itertools.count():\n \n # Choose an action to take\n # If we're using SARSA we already decided in the previous step\n if next_action is None:\n action_probs = policy(state)\n action = np.random.choice(np.arange(len(action_probs)), p=action_probs)\n else:\n action = next_action\n \n # Take a step\n next_state, reward, done, _ = env.step(action)\n \n # Update statistics\n stats.episode_rewards[i_episode] += reward\n stats.episode_lengths[i_episode] = t\n \n # TD Update\n q_values_next = estimator.predict(next_state)\n \n # Use this code for Q-Learning\n # Q-Value TD Target\n td_target = reward + discount_factor * np.max(q_values_next)\n \n # Use this code for SARSA TD Target for on policy-training:\n # next_action_probs = policy(next_state)\n # next_action = np.random.choice(np.arange(len(next_action_probs)), p=next_action_probs) \n # td_target = reward + discount_factor * q_values_next[next_action]\n \n # Update the function approximator using our target\n estimator.update(state, action, td_target)\n \n print(\"\\rStep {} @ Episode {}/{} ({})\".format(t, i_episode + 1, num_episodes, last_reward), end=\"\")\n \n if done:\n break\n \n state = next_state\n \n return stats",
"_____no_output_____"
],
[
"estimator = Estimator()",
"_____no_output_____"
],
[
"# Note: For the Mountain Car we don't actually need an epsilon > 0.0\n# because our initial estimate for all states is too \"optimistic\" which leads\n# to the exploration of all states.\nstats = q_learning(env, estimator, 100, epsilon=0.0)",
"Step 110 @ Episode 100/100 (-163.0)"
],
[
"plotting.plot_cost_to_go_mountain_car(env, estimator)\nplotting.plot_episode_stats(stats, smoothing_window=25)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c34e9d5136dd88e7d95b0f6b9713a31d89f469 | 3,182 | ipynb | Jupyter Notebook | notebooks/pymt-demo/Introduction.ipynb | csdms/jetstream | 42ec2f923b3479387105171ed384989033d9200a | [
"MIT"
] | null | null | null | notebooks/pymt-demo/Introduction.ipynb | csdms/jetstream | 42ec2f923b3479387105171ed384989033d9200a | [
"MIT"
] | null | null | null | notebooks/pymt-demo/Introduction.ipynb | csdms/jetstream | 42ec2f923b3479387105171ed384989033d9200a | [
"MIT"
] | null | null | null | 27.196581 | 171 | 0.563168 | [
[
[
"# PyMT Beta: The Python Modeling Tool\n\n* Eric Hutton\n* Greg Tucker\n\n*CSDMS All-Hands Meeting, May 2017*",
"_____no_output_____"
],
[
"## Things I wished I could have done in grad school...\n\n* Run model(s) in an ***interactive, command-line environment*** complete with plotting and data analysis abilities\n* Play with model(s) by ***running interactively***: advance forward, pause to plot output, inspect values, change parameters or state variables … all on the fly!\n* ***Easily couple models*** by running them iteratively and exchanging data\n* Learn to use models easily, because they have a ***standard basic interface***",
"_____no_output_____"
],
[
"# CSDMS Basic Model Interface (BMI)\n\n* The BMI is a collection of standardized functions (a.k.a. subroutines) that you add to your model code.\n* They do a few basic things:\n * ***initialize*** the model\n * ***run*** the model for a given period of time or # of steps\n * ***finalize*** (clean up) the model\n * ***get value(s)*** of a variable or parameter\n * ***set value(s)*** of a variable or parameter\n* BMI functions are defined for C, C++, Fortran, Python, and Java\n\n## To learn more\n\n* Peckham et al. (2013) Computers & Geosciences\n* http://bmi-spec.readthedocs.io\n",
"_____no_output_____"
],
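[
"In code, a BMI-driven model run typically looks something like the sketch below (the method names follow the BMI spec linked above; the config file name and variable name are made up for illustration):\n\n```python\nmodel.initialize('config_file.cfg')\nwhile model.get_current_time() < model.get_end_time():\n    model.update()\nvalues = model.get_value('some_output_variable')\nmodel.finalize()\n```",
"_____no_output_____"
],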
[
"# Oh the places you'll go\n\nUpon BMI graduation, things CSDMS adds to your model:\n* Thing 1: Write a recipe that builds your model\n* Thing 2: Build and deliver it to the Bakery\n* Thing 3: Build and test regularly\n* Thing 4: Babelize\n* Thing 5: Build and test regularly\n* Thing 6: ***PyMT component***",
"_____no_output_____"
],
[
"# We’ll run through 3 examples...\n\n* Example 1: Running a generic BMI model in PyMT\n* Example 2: Running Child in PyMT\n* Example 3: Coupling Child and Sedflux3D in PyMT\n",
"_____no_output_____"
],
[
"# Questions?\n\nYou should now be able to:\n* Run any PyMT component model\n* Couple two PyMT components\n",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c3605c7a2f1c3d3053be224a11cc85a4e903f2 | 47,806 | ipynb | Jupyter Notebook | src/Next Steps with Python/.ipynb_checkpoints/Writing Reusable Code using Functions in Python-checkpoint.ipynb | sahanashetty31/Data-Analysis-with-Python-Zero-to-Pandas | 3128e1ad931757965efdc6434c4b1473d4d2b485 | [
"MIT"
] | null | null | null | src/Next Steps with Python/.ipynb_checkpoints/Writing Reusable Code using Functions in Python-checkpoint.ipynb | sahanashetty31/Data-Analysis-with-Python-Zero-to-Pandas | 3128e1ad931757965efdc6434c4b1473d4d2b485 | [
"MIT"
] | null | null | null | src/Next Steps with Python/.ipynb_checkpoints/Writing Reusable Code using Functions in Python-checkpoint.ipynb | sahanashetty31/Data-Analysis-with-Python-Zero-to-Pandas | 3128e1ad931757965efdc6434c4b1473d4d2b485 | [
"MIT"
] | 1 | 2021-08-02T19:38:17.000Z | 2021-08-02T19:38:17.000Z | 32.041555 | 1,934 | 0.587876 | [
[
[
"# Writing Reusable Code using Functions in Python\n\n\n\n### Part 4 of \"Data Analysis with Python: Zero to Pandas\"\n\n\n\n",
"_____no_output_____"
],
[
"This tutorial covers the following topics:\n\n- Creating and using functions in Python\n- Local variables, return values, and optional arguments\n- Reusing functions and using Python library functions\n- Exception handling using `try`-`except` blocks\n- Documenting functions using docstrings",
"_____no_output_____"
],
[
"## Creating and using functions\n\nA function is a reusable set of instructions that takes one or more inputs, performs some operations, and often returns an output. Python contains many in-built functions like `print`, `len`, etc., and provides the ability to define new ones.",
"_____no_output_____"
]
],
[
[
"today = \"Saturday\"\nprint(\"Today is\", today)",
"Today is Saturday\n"
]
],
[
[
"You can define a new function using the `def` keyword.",
"_____no_output_____"
]
],
[
[
"def say_hello():\n print('Hello there!')\n print('How are you?')",
"_____no_output_____"
]
],
[
[
"Note the round brackets or parentheses `()` and colon `:` after the function's name. Both are essential parts of the syntax. The function's *body* contains an indented block of statements. \n\nThe statements inside a function's body are not executed when the function is defined. To execute the statements, we need to *call* or *invoke* the function.",
"_____no_output_____"
]
],
[
[
"say_hello()",
"Hello there!\nHow are you?\n"
]
],
[
[
"### Function arguments\n\nFunctions can accept zero or more values as *inputs* (also knows as *arguments* or *parameters*). Arguments help us write flexible functions that can perform the same operations on different values. Further, functions can return a result that can be stored in a variable or used in other expressions.\n\nHere's a function that filters out the even numbers from a list and returns a new list using the `return` keyword.",
"_____no_output_____"
]
],
[
[
"def say_hello(name):\n print('Hello {}'.format(name))",
"_____no_output_____"
],
[
"say_hello(\"John\")",
"Hello John\n"
],
[
"say_hello(\"Jane\")",
"Hello Jane\n"
],
[
"def filter_even(number_list):\n result_list = []\n for number in number_list:\n if number % 2 == 0:\n result_list.append(number)\n return result_list",
"_____no_output_____"
]
],
[
[
"Can you understand what the function does by looking at the code? If not, try executing each line of the function's body separately within a code cell with an actual list of numbers in place of `number_list`.",
"_____no_output_____"
]
],
[
[
"even_list = filter_even([1, 2, 3, 4, 5, 6, 7])",
"_____no_output_____"
],
[
"even_list",
"_____no_output_____"
],
[
"filter_even([1, 3, 5, 7])",
"_____no_output_____"
]
],
[
[
"## Writing great functions in Python\n\nAs a programmer, you will spend most of your time writing and using functions. Python offers many features to make your functions powerful and flexible. Let's explore some of these by solving a problem:\n\n> Radha is planning to buy a house that costs `$1,260,000`. She considering two options to finance her purchase:\n>\n> * Option 1: Make an immediate down payment of `$300,000`, and take loan 8-year loan with an interest rate of 10% (compounded monthly) for the remaining amount.\n> * Option 2: Take a 10-year loan with an interest rate of 8% (compounded monthly) for the entire amount.\n>\n> Both these loans have to be paid back in equal monthly installments (EMIs). Which loan has a lower EMI among the two?\n\n\nSince we need to compare the EMIs for two loan options, defining a function to calculate the EMI for a loan would be a great idea. The inputs to the function would be cost of the house, the down payment, duration of the loan, rate of interest etc. We'll build this function step by step.\n\nFirst, let's write a simple function that calculates the EMI on the entire cost of the house, assuming that the loan must be paid back in one year, and there is no interest or down payment.",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount):\n emi = amount / 12\n print('The EMI is ${}'.format(emi))",
"_____no_output_____"
],
[
"loan_emi(12_60_000)",
"The EMI is $105000.0\n"
]
],
[
[
"### Local variables and scope\n\nLet's add a second argument to account for the duration of the loan in months.",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount, duration):\n emi = amount / duration\n print('The EMI is ${}'.format(emi))\n",
"_____no_output_____"
]
],
[
[
"Note that the variable `emi` defined inside the function is not accessible outside. The same is true for the parameters `amount` and `duration`. These are all *local variables* that lie within the *scope* of the function.\n\n> **Scope**: Scope refers to the region within the code where a particular variable is visible. Every function (or class definition) defines a scope within Python. Variables defined in this scope are called *local variables*. Variables that are available everywhere are called *global variables*. Scope rules allow you to use the same variable names in different functions without sharing values from one to the other. ",
"_____no_output_____"
]
],
[
[
"emi",
"_____no_output_____"
],
[
"amount",
"_____no_output_____"
],
[
"duration",
"_____no_output_____"
]
],
[
[
"because above variables are all the local variables",
"_____no_output_____"
],
[
"We can now compare a 6-year loan vs. a 10-year loan (assuming no down payment or interest).",
"_____no_output_____"
]
],
[
[
"loan_emi(12_60_600, 8*12)",
"The EMI is $13131.25\n"
],
[
"loan_emi(12_60_000, 10*12)",
"The EMI is $10500.0\n"
]
],
[
[
"### Return values\n\nAs you might expect, the EMI for the 6-year loan is higher compared to the 10-year loan. Right now, we're printing out the result. It would be better to return it and store the results in variables for easier comparison. We can do this using the `return` statement",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount, duration):\n emi = amount / duration\n return emi",
"_____no_output_____"
],
[
"emi1 = loan_emi(12_60_000, 8*12)",
"_____no_output_____"
],
[
"emi2 = loan_emi(12_60_000, 10*12)",
"_____no_output_____"
],
[
"emi1",
"_____no_output_____"
],
[
"emi2",
"_____no_output_____"
],
[
"emi1 - emi2",
"_____no_output_____"
]
],
[
[
"### Optional arguments\n\nNext, let's add another argument to account for the immediate down payment. We'll make this an *optional argument* with a default value of 0.",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount, duration, down_payment=0):\n loan_amount = amount - down_payment\n emi = amount / duration\n return emi",
"_____no_output_____"
],
[
"emi1 = loan_emi(12_60_000, 8*12, 3e5)",
"_____no_output_____"
],
[
"emi1",
"_____no_output_____"
],
[
"emi2 = loan_emi(12_60_000, 10*12)",
"_____no_output_____"
],
[
"emi2",
"_____no_output_____"
]
],
[
[
"Next, let's add the interest calculation into the function. Here's the formula used to calculate the EMI for a loan:\n\n<img src=\"https://i.imgur.com/iKujHGK.png\" style=\"width:240px\">\n\nwhere:\n\n* `P` is the loan amount (principal)\n* `n` is the no. of months\n* `r` is the rate of interest per month\n\nThe derivation of this formula is beyond the scope of this tutorial. See this video for an explanation: https://youtu.be/Coxza9ugW4E .",
"_____no_output_____"
]
],
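[
[
"For reference, the formula in the image above can also be written out as\n\n$$EMI = \\frac{P \\cdot r \\cdot (1 + r)^n}{(1 + r)^n - 1}$$\n\nAs a quick sanity check with made-up numbers: for `P = $1200` and `n = 12` months, the EMI approaches `$100` per month as `r` goes to 0, matching the no-interest case. Note that the formula itself is undefined at `r = 0` (division by zero) - we will handle that case separately with `try`-`except` later in this tutorial.",
"_____no_output_____"
]
],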
[
[
"def loan_emi(amount, duration, rate, down_payment=0):\n loan_amount = amount - down_payment\n emi = loan_amount * rate * ((1 + rate)**duration) / (((1 + rate)**duration)-1)\n return emi",
"_____no_output_____"
]
],
[
[
"Note that while defining the function, required arguments like `cost`, `duration` and `rate` must appear before optional arguments like `down_payment`.\n\nLet's calculate the EMI for Option 1",
"_____no_output_____"
]
],
[
[
"loan_emi(12_60_000, 8*12, 0.1/12, 3e5)",
"_____no_output_____"
]
],
[
[
"While calculating the EMI for Option 2, we need not include the `down_payment` argument.",
"_____no_output_____"
]
],
[
[
"loan_emi(12_60_000,10*12, 0.08/12)",
"_____no_output_____"
]
],
[
[
"### Named arguments\n\nInvoking a function with many arguments can often get confusing and is prone to human errors. Python provides the option of invoking functions with *named* arguments for better clarity. You can also split function invocation into multiple lines.",
"_____no_output_____"
]
],
[
[
"emi1 = loan_emi(\n amount = 12_60_000,\n duration=8*12,\n rate=0.1/12,\n down_payment=3e5\n)",
"_____no_output_____"
],
[
"emi1",
"_____no_output_____"
],
[
"emi2 = loan_emi(amount=1260000, duration=10*12, rate=0.08/12)",
"_____no_output_____"
],
[
"emi2",
"_____no_output_____"
],
[
"def round_up(x):\n emi2 = loan_emi(amount=1260000, duration=10*12, rate=0.08/12)\n return round(emi2)\nround_up(emi2)",
"_____no_output_____"
]
],
[
[
"### Modules and library functions\n\nWe can already see that the EMI for Option 1 is lower than the EMI for Option 2. However, it would be nice to round up the amount to full dollars, rather than showing digits after the decimal. To achieve this, we might want to write a function that can take a number and round it up to the next integer (e.g., 1.2 is rounded up to 2). That would be a great exercise to try out!\n\nHowever, since rounding numbers is a fairly common operation, Python provides a function for it (along with thousands of other functions) as part of the [Python Standard Library](https://docs.python.org/3/library/). Functions are organized into *modules* that need to be imported to use the functions they contain. \n\n> **Modules**: Modules are files containing Python code (variables, functions, classes, etc.). They provide a way of organizing the code for large Python projects into files and folders. The key benefit of using modules is _namespaces_: you must import the module to use its functions within a Python script or notebook. Namespaces provide encapsulation and avoid naming conflicts between your code and a module or across modules.\n\nWe can use the `ceil` function (short for *ceiling*) from the `math` module to round up numbers. Let's import the module and use it to round up the number `1.2`. ",
"_____no_output_____"
]
],
[
[
"import math",
"_____no_output_____"
],
[
"help(math.ceil)",
"Help on built-in function ceil in module math:\n\nceil(x, /)\n Return the ceiling of x as an Integral.\n \n This is the smallest integer >= x.\n\n"
],
[
"math.ceil(1.2)",
"_____no_output_____"
]
],
[
[
"Let's now use the `math.ceil` function within the `home_loan_emi` function to round up the EMI amount. \n\n> Using functions to build other functions is a great way to reuse code and implement complex business logic while still keeping the code small, understandable, and manageable. Ideally, a function should do one thing and one thing only. If you find yourself writing a function that does too many things, consider splitting it into multiple smaller, independent functions. As a rule of thumb, try to limit your functions to 10 lines of code or less. Good programmers always write short, simple, and readable functions.\n\n",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount, duration, rate, down_payment=0):\n loan_amount = amount - down_payment\n emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)\n emi = math.ceil(emi)\n return emi",
"_____no_output_____"
],
[
"emi1 = loan_emi(\n amount=1260000, \n duration=8*12, \n rate=0.1/12, \n down_payment=3e5\n)",
"_____no_output_____"
],
[
"emi1",
"_____no_output_____"
],
[
"emi2 = loan_emi(amount=1260000, duration=10*12, rate=0.08/12)",
"_____no_output_____"
],
[
"emi2",
"_____no_output_____"
]
],
[
[
"Let's compare the EMIs and display a message for the option with the lower EMI.",
"_____no_output_____"
]
],
[
[
"if emi1 < emi2:\n print('Option 1 has the lower EMI: ${}'.format(emi1))\nelse:\n print('Option 2 has the lower EMI: ${}'.format(emi2))",
"Option 1 has the lower EMI: $14568\n"
]
],
[
[
"### Reusing and improving functions \n\nNow we know for sure that \"Option 1\" has the lower EMI among the two options. But what's even better is that we now have a handy function `loan_emi` that we can use to solve many other similar problems with just a few lines of code. Let's try it with a couple more questions.",
"_____no_output_____"
],
[
"> **Q**: Shaun is currently paying back a home loan for a house he bought a few years ago. The cost of the house was `$800,000`. Shaun made a down payment of `25%` of the price. He financed the remaining amount using a 6-year loan with an interest rate of `7%` per annum (compounded monthly). Shaun is now buying a car worth `$60,000`, which he is planning to finance using a 1-year loan with an interest rate of `12%` per annum. Both loans are paid back in EMIs. What is the total monthly payment Shaun makes towards loan repayment?\n\nThis question is now straightforward to solve, using the `loan_emi` function we've already defined.",
"_____no_output_____"
]
],
[
[
"cost_of_the_house = 800000\nhome_loan_duration = 6*12 #months\nhome_loan_rate = 0.07/12 #monthly\nhome_down_payment = .25 * 800000\n\nemi_house = loan_emi(amount=cost_of_the_house,\n duration=home_loan_duration,\n rate=home_loan_rate,\n down_payment=home_down_payment)\nemi_house",
"_____no_output_____"
],
[
"cost_of_car = 60000\ncar_loan_duration = 1*12 #months\ncar_loan_rate = .12*12 #monthly\n\nemi_car = loan_emi(amount=cost_of_car,\n duration=car_loan_duration,\n rate=car_loan_rate)\nemi_car",
"_____no_output_____"
],
[
"print(\"Shaun makes a total monthly payment of ${} towards loan repayments.\".format(emi_house+emi_car))",
"Shaun makes a total monthly payment of $96632 towards loan repayments.\n"
]
],
[
[
"### Exceptions and `try`-`except`\n\n> Q: If you borrow `$100,000` using a 10-year loan with an interest rate of 9% per annum, what is the total amount you end up paying as interest?\n\nOne way to solve this problem is to compare the EMIs for two loans: one with the given rate of interest and another with a 0% rate of interest. The total interest paid is then simply the sum of monthly differences over the duration of the loan.",
"_____no_output_____"
]
],
[
[
"emi_with_interest = loan_emi(amount=100000, duration=10*12, rate=0.09/12)\nemi_with_interest",
"_____no_output_____"
],
[
"emi_without_interest = loan_emi(amount=100000, duration=10*12, rate=0./12)\nemi_without_interest",
"_____no_output_____"
]
],
[
[
"Something seems to have gone wrong! If you look at the error message above carefully, Python tells us precisely what is wrong. Python *throws* a `ZeroDivisionError` with a message indicating that we're trying to divide a number by zero. `ZeroDivisonError` is an *exception* that stops further execution of the program.\n\n> **Exception**: Even if a statement or expression is syntactically correct, it may cause an error when the Python interpreter tries to execute it. Errors detected during execution are called exceptions. Exceptions typically stop further execution of the program unless handled within the program using `try`-`except` statements.\n\nPython provides many built-in exceptions *thrown* when built-in operators, functions, or methods are used incorrectly: https://docs.python.org/3/library/exceptions.html#built-in-exceptions. You can also define your custom exception by extending the `Exception` class (more on that later).\n\nYou can use the `try` and `except` statements to *handle* an exception. Here's an example:",
"_____no_output_____"
]
],
[
[
"try:\n print(\"Now computing the result..\")\n result = 5 / 0\n print(\"Computation was completed successfully\")\nexcept ZeroDivisionError:\n print(\"Failed to compute result because you were trying to divide by zero\")\n result = None\n \nprint(result)",
"Now computing the result..\nFailed to compute result because you were trying to divide by zero\nNone\n"
]
],
[
[
"When an exception occurs inside a `try` block, the block's remaining statements are skipped. The `except` block is executed if the type of exception thrown matches that of the exception being handled. After executing the `except` block, the program execution returns to the normal flow.\n\nYou can also handle more than one type of exception using multiple `except` statements. Learn more about exceptions here: https://www.w3schools.com/python/python_try_except.asp .\n\nLet's enhance the `loan_emi` function to use `try`-`except` to handle the scenario where the interest rate is 0%. It's common practice to make changes/enhancements to functions over time as new scenarios and use cases come up. It makes functions more robust & versatile.",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount, duration, rate, down_payment=0):\n loan_amount = amount - down_payment\n try:\n emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)\n except ZeroDivisionError:\n emi = loan_amount / duration\n emi = math.ceil(emi)\n return emi",
"_____no_output_____"
]
],
[
[
"We can use the updated `loan_emi` function to solve our problem.\n\n> **Q**: If you borrow `$100,000` using a 10-year loan with an interest rate of 9% per annum, what is the total amount you end up paying as interest?\n\n",
"_____no_output_____"
]
],
[
[
"emi_with_interest = loan_emi(amount=100000, duration=10*12, rate=0.09/12)\nemi_with_interest",
"_____no_output_____"
],
[
"emi_without_interest = loan_emi(amount=100000, duration=10*12, rate=0)\nemi_without_interest",
"_____no_output_____"
],
[
"total_interest = (emi_with_interest - emi_without_interest) * 10*12",
"_____no_output_____"
],
[
"print(\"The total interest paid is ${}.\".format(total_interest))",
"The total interest paid is $51960.\n"
]
],
[
[
"### Documenting functions using Docstrings\n\nWe can add some documentation within our function using a *docstring*. A docstring is simply a string that appears as the first statement within the function body, and is used by the `help` function. A good docstring describes what the function does, and provides some explanation about the arguments.",
"_____no_output_____"
]
],
[
[
"def loan_emi(amount, duration, rate, down_payment=0):\n \"\"\"Calculates the equal montly installment (EMI) for a loan.\n \n Arguments:\n amount - Total amount to be spent (loan + down payment)\n duration - Duration of the loan (in months)\n rate - Rate of interest (monthly)\n down_payment (optional) - Optional intial payment (deducted from amount)\n \"\"\"\n loan_amount = amount - down_payment\n try:\n emi = loan_amount * rate * ((1+rate)**duration) / (((1+rate)**duration)-1)\n except ZeroDivisionError:\n emi = loan_amount / duration\n emi = math.ceil(emi)\n return emi",
"_____no_output_____"
]
],
[
[
"In the docstring above, we've provided some additional information that the `duration` and `rate` are measured in months. You might even consider naming the arguments `duration_months` and `rate_monthly`, to avoid any confusion whatsoever. Can you think of some other ways to improve the function?",
"_____no_output_____"
]
],
[
[
"help(loan_emi)",
"Help on function loan_emi in module __main__:\n\nloan_emi(amount, duration, rate, down_payment=0)\n Calculates the equal montly installment (EMI) for a loan.\n \n Arguments:\n amount - Total amount to be spent (loan + down payment)\n duration - Duration of the loan (in months)\n rate - Rate of interest (monthly)\n down_payment (optional) - Optional intial payment (deducted from amount)\n\n"
]
],
[
[
"## Exercise - Data Analysis for Vacation Planning\n\nYou're planning a vacation, and you need to decide which city you want to visit. You have shortlisted four cities and identified the return flight cost, daily hotel cost, and weekly car rental cost. While renting a car, you need to pay for entire weeks, even if you return the car sooner.\n\n\n| City | Return Flight (`$`) | Hotel per day (`$`) | Weekly Car Rental (`$`) | \n|------|--------------------------|------------------|------------------------|\n| Paris| 200 | 20 | 200 |\n| London| 250 | 30 | 120 |\n| Dubai| 370 | 15 | 80 |\n| Mumbai| 450 | 10 | 70 | \n\n\nAnswer the following questions using the data above:\n\n1. If you're planning a 1-week long trip, which city should you visit to spend the least amount of money?\n2. How does the answer to the previous question change if you change the trip's duration to four days, ten days or two weeks?\n3. If your total budget for the trip is `$1000`, which city should you visit to maximize the duration of your trip? Which city should you visit if you want to minimize the duration?\n4. How does the answer to the previous question change if your budget is `$600`, `$2000`, or `$1500`?\n\n*Hint: To answer these questions, it will help to define a function `cost_of_trip` with relevant inputs like flight cost, hotel rate, car rental rate, and duration of the trip. You may find the `math.ceil` function useful for calculating the total cost of car rental.*",
"_____no_output_____"
]
],
[
[
"# Use these cells to answer the question - build the function step-by-step\nparis_dict = dict(city=\"Paris\", cost_return_flight=200, cost_hotel_per_night=20, cost_car_rental_weekly=200)\nlondon_dict = dict(city=\"London\", cost_return_flight=250, cost_hotel_per_night=30, cost_car_rental_weekly=120)\ndubai_dict = dict(city=\"Dubai\", cost_return_flight=370, cost_hotel_per_night=15, cost_car_rental_weekly=80)\nmumbai_dict = dict(city=\"Mumbai\", cost_return_flight=450, cost_hotel_per_night=10, cost_car_rental_weekly=70)",
"_____no_output_____"
],
[
"def total_trip_cost_duration(city_dict, trip_duration = 7, trip_budget = 0):\n cost_return_flight = city_dict[\"cost_return_flight\"]\n cost_hotel_per_night = city_dict[\"cost_hotel_per_night\"]\n cost_car_rental_weekly = city_dict[\"cost_car_rental_weekly\"]\n \n total_trip_cost = 0\n total_hotel_cost = 0\n total_car_rental = 0\n total_trip_duration_days = 0\n total_trip_duration_weeks = 0\n remaining_budget = 0\n \n total_trip_cost += cost_return_flight\n \n if(not trip_budget):\n total_hotel_cost = cost_hotel_per_night * trip_duration\n total_trip_duration_weeks = (trip_duration // 7) if (trip_duration % 7 == 0) else (trip_duration // 7 + 1)\n total_car_rental = cost_car_rental_weekly * total_trip_duration_weeks\n total_trip_cost += (total_hotel_cost + total_car_rental)\n total_trip_duration_days = trip_duration\n else:\n remaining_budget = trip_budget - total_trip_cost\n total_cost_per_week = (cost_car_rental_weekly + (cost_hotel_per_night * 7))\n total_trip_duration_weeks = (remaining_budget // total_cost_per_week)\n total_trip_cost += (total_cost_per_week * total_trip_duration_weeks)\n remaining_budget = trip_budget - total_trip_cost\n total_trip_duration_days = total_trip_duration_weeks * 7\n \n if (remaining_budget >= (cost_car_rental_weekly + cost_hotel_per_night)):\n total_trip_duration_weeks += 1\n total_trip_cost += cost_car_rental_weekly\n remaining_budget = trip_budget - total_trip_cost\n remaining_days = remaining_budget // cost_hotel_per_night\n total_trip_cost += (cost_hotel_per_night * remaining_days)\n total_trip_duration_days += remaining_days \n \n return (total_trip_cost, total_trip_duration_days, total_trip_duration_weeks)",
"_____no_output_____"
],
[
"paris_total_trip_cost = total_trip_cost_duration(city_dict = paris_dict, trip_budget = 1150)\nlondon_total_trip_cost = total_trip_cost_duration(city_dict = london_dict, trip_duration = 14)\ndubai_total_trip_cost = total_trip_cost_duration(city_dict = dubai_dict, trip_duration = 16)\nmumbai_total_trip_cost = total_trip_cost_duration(city_dict = mumbai_dict, trip_budget = 1500)\nprint(paris_total_trip_cost)\nprint(london_total_trip_cost)\nprint(dubai_total_trip_cost)\nprint(mumbai_total_trip_cost)",
"(1140, 17, 3)\n(910, 14, 2)\n(850, 16, 3)\n(1430, 49, 7)\n"
]
],
[
[
"## Questions for Revision\n\nTry answering the following questions to test your understanding of the topics covered in this notebook:\n\n1. What is a function?\n2. What are the benefits of using functions?\n3. What are some built-in functions in Python?\n4. How do you define a function in Python? Give an example.\n5. What is the body of a function?\n6. When are the statements in the body of a function executed?\n7. What is meant by calling or invoking a function? Give an example.\n8. What are function arguments? How are they useful?\n9. How do you store the result of a function in a variable?\n10. What is the purpose of the `return` keyword in Python?\n11. Can you return multiple values from a function?\n12. Can a `return` statement be used inside an `if` block or a `for` loop?\n13. Can the `return` keyword be used outside a function?\n14. What is scope in a programming region? \n15. How do you define a variable inside a function?\n16. What are local & global variables?\n17. Can you access the variables defined inside a function outside its body? Why or why not?\n18. What do you mean by the statement \"a function defines a scope within Python\"?\n19. Do for and while loops define a scope, like functions?\n20. Do if-else blocks define a scope, like functions?\n21. What are optional function arguments & default values? Give an example.\n22. Why should the required arguments appear before the optional arguments in a function definition?\n23. How do you invoke a function with named arguments? Illustrate with an example.\n24. Can you split a function invocation into multiple lines?\n25. Write a function that takes a number and rounds it up to the nearest integer.\n26. What are modules in Python?\n27. What is a Python library?\n28. What is the Python Standard Library?\n29. Where can you learn about the modules and functions available in the Python standard library?\n30. How do you install a third-party library?\n31. What is a module namespace? How is it useful?\n32. What problems would you run into if Python modules did not provide namespaces?\n33. How do you import a module?\n34. How do you use a function from an imported module? Illustrate with an example.\n35. Can you invoke a function inside the body of another function? Give an example.\n36. What is the single responsibility principle, and how does it apply while writing functions?\n37. What some characteristics of well-written functions?\n38. Can you use if statements or while loops within a function? Illustrate with an example.\n39. What are exceptions in Python? When do they occur?\n40. How are exceptions different from syntax errors?\n41. What are the different types of in-built exceptions in Python? Where can you learn about them?\n42. How do you prevent the termination of a program due to an exception?\n43. What is the purpose of the `try`-`except` statements in Python?\n44. What is the syntax of the `try`-`except` statements? Give an example.\n45. What happens if an exception occurs inside a `try` block?\n46. How do you handle two different types of exceptions using `except`? Can you have multiple `except` blocks under a single `try` block?\n47. How do you create an `except` block to handle any type of exception?\n48. Illustrate the usage of `try`-`except` inside a function with an example.\n49. What is a docstring? Why is it useful?\n50. How do you display the docstring for a function?\n51. What are *args and **kwargs? How are they useful? Give an example.\n52. Can you define functions inside functions? \n53. 
What is function closure in Python? How is it useful? Give an example.\n54. What is recursion? Illustrate with an example.\n55. Can functions accept other functions as arguments? Illustrate with an example.\n56. Can functions return other functions as results? Illustrate with an example.\n57. What are decorators? How are they useful?\n58. Implement a function decorator which prints the arguments and result of wrapped functions.\n59. What are some in-built decorators in Python?\n60. What are some popular Python libraries?",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c36649f728220279a40f40be04597c512ceaa5 | 198,702 | ipynb | Jupyter Notebook | 01-Computer-Vision-Fundamentals/.ipynb_checkpoints/2 Region Masking-checkpoint.ipynb | vyasparthm/AutonomousDriving | 4a767442a7e661dd71f4e160f3c071dc02614ab6 | [
"MIT"
] | null | null | null | 01-Computer-Vision-Fundamentals/.ipynb_checkpoints/2 Region Masking-checkpoint.ipynb | vyasparthm/AutonomousDriving | 4a767442a7e661dd71f4e160f3c071dc02614ab6 | [
"MIT"
] | null | null | null | 01-Computer-Vision-Fundamentals/.ipynb_checkpoints/2 Region Masking-checkpoint.ipynb | vyasparthm/AutonomousDriving | 4a767442a7e661dd71f4e160f3c071dc02614ab6 | [
"MIT"
] | null | null | null | 1,122.610169 | 108,104 | 0.957202 | [
[
[
"# Region Masking\n\nIn last excersize we were able to identify lane lines but that is not enough because there were many more objects/pixels we identified and they were not all lane lines.\nIn this notebook, lets define the region of interest and identify lane lines from there. Its basically a way to make sure your camera sees what you need to see in order to identify \"Lane of Interest\"\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\n\nimage = mpimg.imread('../img/test.jpg')\nprint('This image is: ', type(image), \n 'with dimensions:', image.shape)\n\n# Pull out the x and y sizes and make a copy of the image\nysize = image.shape[0]\nxsize = image.shape[1]\nregion_select = np.copy(image)\n\nplt.imshow(region_select)",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
],
[
[
"### Masking the Image\n\nNow that we have been able to pick threshold colors in last quiz, let's go one step further and mark the region of interest in the given image.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\n\nimage = mpimg.imread('../img/test.jpg')\nprint('This image is: ', type(image), \n 'with dimensions:', image.shape)\n\n# Pull out the x and y sizes and make a copy of the image\nysize = image.shape[0]\nxsize = image.shape[1]\nregion_select = np.copy(image)\n\n#plt.imshow(region_select)\n\n\n\n# Define a triangle region of interest \n# Keep in mind the origin (x=0, y=0) is in the upper left in image processing\n# Note: if you run this code, you'll find these are not sensible values!!\n# But you'll get a chance to play with them soon in a quiz \nleft_bottom = [0, 539]\nright_bottom = [900, 539]\napex = [450, 300]\n\n# Fit lines (y=Ax+B) to identify the 3 sided region of interest\n# np.polyfit() returns the coefficients [A, B] of the fit\nfit_left = np.polyfit((left_bottom[0], apex[0]), (left_bottom[1], apex[1]), 1)\nfit_right = np.polyfit((right_bottom[0], apex[0]), (right_bottom[1], apex[1]), 1)\nfit_bottom = np.polyfit((left_bottom[0], right_bottom[0]), (left_bottom[1], right_bottom[1]), 1)\n\n# Find the region inside the lines\nXX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize))\nregion_thresholds = (YY > (XX*fit_left[0] + fit_left[1])) & \\\n (YY > (XX*fit_right[0] + fit_right[1])) & \\\n (YY < (XX*fit_bottom[0] + fit_bottom[1]))\n\n# Color pixels red which are inside the region of interest\nregion_select[region_thresholds] = [255, 255, 100]\n\n# Display the image\nplt.imshow(region_select)\n",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c36ba39ba20de506af07e89db90734e1637724 | 135,186 | ipynb | Jupyter Notebook | soln/chap03soln.ipynb | pmalo46/ModSimPy | dc5ef44757b59b38215aead6fc4c0d486526c1e5 | [
"MIT"
] | 2 | 2019-04-27T22:43:12.000Z | 2019-11-11T15:12:23.000Z | soln/chap03soln.ipynb | pmalo46/ModSimPy | dc5ef44757b59b38215aead6fc4c0d486526c1e5 | [
"MIT"
] | 33 | 2019-10-09T18:50:22.000Z | 2022-03-21T01:39:48.000Z | soln/chap03soln.ipynb | pmalo46/ModSimPy | dc5ef44757b59b38215aead6fc4c0d486526c1e5 | [
"MIT"
] | null | null | null | 106.028235 | 22,760 | 0.845065 | [
[
[
"# Modeling and Simulation in Python\n\nChapter 3\n\nCopyright 2017 Allen Downey\n\nLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)\n",
"_____no_output_____"
]
],
[
[
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim library\nfrom modsim import *\n\n# set the random number generator\nnp.random.seed(7)",
"_____no_output_____"
]
],
[
[
"## More than one State object\n\nHere's the code from the previous chapter, with two changes:\n\n1. I've added DocStrings that explain what each function does, and what parameters it takes.\n\n2. I've added a parameter named `state` to the functions so they work with whatever `State` object we give them, instead of always using `bikeshare`. That makes it possible to work with more than one `State` object.",
"_____no_output_____"
]
],
[
[
"def step(state, p1, p2):\n \"\"\"Simulate one minute of time.\n \n state: bikeshare State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n \"\"\"\n if flip(p1):\n bike_to_wellesley(state)\n \n if flip(p2):\n bike_to_olin(state)\n \ndef bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n state.wellesley -= 1\n state.olin += 1\n \ndef decorate_bikeshare():\n \"\"\"Add a title and label the axes.\"\"\"\n decorate(title='Olin-Wellesley Bikeshare',\n xlabel='Time step (min)', \n ylabel='Number of bikes')",
"_____no_output_____"
]
],
[
[
"And here's `run_simulation`, which is a solution to the exercise at the end of the previous notebook.",
"_____no_output_____"
]
],
[
[
"def run_simulation(state, p1, p2, num_steps):\n \"\"\"Simulate the given number of time steps.\n \n state: State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n num_steps: number of time steps\n \"\"\"\n results = TimeSeries() \n for i in range(num_steps):\n step(state, p1, p2)\n results[i] = state.olin\n \n plot(results, label='Olin')",
"_____no_output_____"
]
],
[
[
"Now we can create more than one `State` object:",
"_____no_output_____"
]
],
[
[
"bikeshare1 = State(olin=10, wellesley=2)",
"_____no_output_____"
],
[
"bikeshare2 = State(olin=2, wellesley=10)",
"_____no_output_____"
]
],
[
[
"Whenever we call a function, we indicate which `State` object to work with:",
"_____no_output_____"
]
],
[
[
"bike_to_olin(bikeshare1)",
"_____no_output_____"
],
[
"bike_to_wellesley(bikeshare2)",
"_____no_output_____"
]
],
[
[
"And you can confirm that the different objects are getting updated independently:",
"_____no_output_____"
]
],
[
[
"bikeshare1",
"_____no_output_____"
],
[
"bikeshare2",
"_____no_output_____"
]
],
[
[
"## Negative bikes",
"_____no_output_____"
],
[
"In the code we have so far, the number of bikes at one of the locations can go negative, and the number of bikes at the other location can exceed the actual number of bikes in the system.\n\nIf you run this simulation a few times, it happens often.",
"_____no_output_____"
]
],
[
[
"bikeshare = State(olin=10, wellesley=2)\nrun_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"_____no_output_____"
]
],
[
[
"We can fix this problem using the `return` statement to exit the function early if an update would cause negative bikes.",
"_____no_output_____"
]
],
[
[
"def bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n if state.olin == 0:\n return\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n if state.wellesley == 0:\n return\n state.wellesley -= 1\n state.olin += 1",
"_____no_output_____"
]
],
[
[
"Now if you run the simulation again, it should behave.",
"_____no_output_____"
]
],
[
[
"bikeshare = State(olin=10, wellesley=2)\nrun_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"_____no_output_____"
]
],
[
[
"## Comparison operators",
"_____no_output_____"
],
[
"The `if` statements in the previous section used the comparison operator `==`. The other comparison operators are listed in the book.\n\nIt is easy to confuse the comparison operator `==` with the assignment operator `=`.\n\nRemember that `=` creates a variable or gives an existing variable a new value.",
"_____no_output_____"
]
],
[
[
"x = 5",
"_____no_output_____"
]
],
[
[
"Whereas `==` compares two values and returns `True` if they are equal.",
"_____no_output_____"
]
],
[
[
"x == 5",
"_____no_output_____"
]
],
[
[
"You can use `==` in an `if` statement.",
"_____no_output_____"
]
],
[
[
"if x == 5:\n print('yes, x is 5')",
"yes, x is 5\n"
]
],
[
[
"But if you use `=` in an `if` statement, you get an error.",
"_____no_output_____"
]
],
[
[
"# If you remove the # from the if statement and run it, you'll get\n# SyntaxError: invalid syntax\n\n#if x = 5:\n# print('yes, x is 5')",
"_____no_output_____"
]
],
[
[
"**Exercise:** Add an `else` clause to the `if` statement above, and print an appropriate message.\n\nReplace the `==` operator with one or two of the other comparison operators, and confirm they do what you expect.",
"_____no_output_____"
],
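[
"# One possible solution (a sketch; it assumes `x` is still 5 from the cells above)\n\nif x == 5:\n    print('yes, x is 5')\nelse:\n    print('no, x is not 5')\n\nif x > 4:\n    print('yes, x is greater than 4')\nelse:\n    print('no, x is not greater than 4')",
"_____no_output_____"
],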
[
"## Metrics",
"_____no_output_____"
],
[
"Now that we have a working simulation, we'll use it to evaluate alternative designs and see how good or bad they are. The metric we'll use is the number of customers who arrive and find no bikes available, which might indicate a design problem.",
"_____no_output_____"
],
[
"First we'll make a new `State` object that creates and initializes additional state variables to keep track of the metrics.",
"_____no_output_____"
]
],
[
[
"bikeshare = State(olin=10, wellesley=2, \n olin_empty=0, wellesley_empty=0)",
"_____no_output_____"
]
],
[
[
"Next we need versions of `bike_to_wellesley` and `bike_to_olin` that update the metrics.",
"_____no_output_____"
]
],
[
[
"def bike_to_wellesley(state):\n \"\"\"Move one bike from Olin to Wellesley.\n \n state: bikeshare State object\n \"\"\"\n if state.olin == 0:\n state.olin_empty += 1\n return\n state.olin -= 1\n state.wellesley += 1\n \ndef bike_to_olin(state):\n \"\"\"Move one bike from Wellesley to Olin.\n \n state: bikeshare State object\n \"\"\"\n if state.wellesley == 0:\n state.wellesley_empty += 1\n return\n state.wellesley -= 1\n state.olin += 1",
"_____no_output_____"
]
],
[
[
"Now when we run a simulation, it keeps track of unhappy customers.",
"_____no_output_____"
]
],
[
[
"run_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"_____no_output_____"
]
],
[
[
"After the simulation, we can print the number of unhappy customers at each location.",
"_____no_output_____"
]
],
[
[
"bikeshare.olin_empty",
"_____no_output_____"
],
[
"bikeshare.wellesley_empty",
"_____no_output_____"
]
],
[
[
"## Exercises\n\n**Exercise:** As another metric, we might be interested in the time until the first customer arrives and doesn't find a bike. To make that work, we have to add a \"clock\" to keep track of how many time steps have elapsed:\n\n1. Create a new `State` object with an additional state variable, `clock`, initialized to 0. \n\n2. Write a modified version of `step` that adds one to the clock each time it is invoked.\n\nTest your code by running the simulation and check the value of `clock` at the end.",
"_____no_output_____"
]
],
[
[
"bikeshare = State(olin=10, wellesley=2, \n olin_empty=0, wellesley_empty=0,\n clock=0)",
"_____no_output_____"
],
[
"# Solution\n\ndef step(state, p1, p2):\n \"\"\"Simulate one minute of time.\n \n state: bikeshare State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n \"\"\"\n state.clock += 1\n \n if flip(p1):\n bike_to_wellesley(state)\n \n if flip(p2):\n bike_to_olin(state)",
"_____no_output_____"
],
[
"# Solution\n\nrun_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"_____no_output_____"
],
[
"# Solution\n\nbikeshare",
"_____no_output_____"
]
],
[
[
"**Exercise:** Continuing the previous exercise, let's record the time when the first customer arrives and doesn't find a bike.\n\n1. Create a new `State` object with an additional state variable, `t_first_empty`, initialized to -1 as a special value to indicate that it has not been set. \n\n2. Write a modified version of `step` that checks whether`olin_empty` and `wellesley_empty` are 0. If not, it should set `t_first_empty` to `clock` (but only if `t_first_empty` has not already been set).\n\nTest your code by running the simulation and printing the values of `olin_empty`, `wellesley_empty`, and `t_first_empty` at the end.",
"_____no_output_____"
]
],
[
[
"# Solution\n\nbikeshare = State(olin=10, wellesley=2, \n olin_empty=0, wellesley_empty=0,\n clock=0, t_first_empty=-1)",
"_____no_output_____"
],
[
"# Solution\n\ndef step(state, p1, p2):\n \"\"\"Simulate one minute of time.\n \n state: bikeshare State object\n p1: probability of an Olin->Wellesley customer arrival\n p2: probability of a Wellesley->Olin customer arrival\n \"\"\"\n state.clock += 1\n \n if flip(p1):\n bike_to_wellesley(state)\n \n if flip(p2):\n bike_to_olin(state)\n \n if state.t_first_empty != -1:\n return\n \n if state.olin_empty + state.wellesley_empty > 0:\n state.t_first_empty = state.clock",
"_____no_output_____"
],
[
"# Solution\n\nrun_simulation(bikeshare, 0.4, 0.2, 60)\ndecorate_bikeshare()",
"_____no_output_____"
],
[
"# Solution\n\nbikeshare",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c37015f921108f7a6f60fc125cfcbe941aa04b | 103,419 | ipynb | Jupyter Notebook | tutorial_pytorch.ipynb | feifang24/cs224u-spr2021 | 35427f8078d7d38596e9ca4ade20ff2fd694a090 | [
"Apache-2.0"
] | 1,122 | 2015-03-28T22:05:47.000Z | 2022-03-31T10:47:29.000Z | tutorial_pytorch.ipynb | feifang24/cs224u-spr2021 | 35427f8078d7d38596e9ca4ade20ff2fd694a090 | [
"Apache-2.0"
] | 65 | 2016-04-03T03:07:56.000Z | 2022-03-22T18:01:54.000Z | tutorial_pytorch.ipynb | feifang24/cs224u-spr2021 | 35427f8078d7d38596e9ca4ade20ff2fd694a090 | [
"Apache-2.0"
] | 679 | 2015-03-31T01:29:04.000Z | 2022-03-31T23:41:20.000Z | 61.050177 | 33,912 | 0.75462 | [
[
[
"# Tutorial: PyTorch",
"_____no_output_____"
]
],
[
[
"__author__ = \"Ignacio Cases\"\n__version__ = \"CS224u, Stanford, Spring 2021\"",
"_____no_output_____"
]
],
[
[
"## Contents\n\n1. [Motivation](#Motivation)\n1. [Importing PyTorch](#Importing-PyTorch)\n1. [Tensors](#Tensors)\n 1. [Tensor creation](#Tensor-creation)\n 1. [Operations on tensors](#Operations-on-tensors)\n1. [GPU computation](#GPU-computation)\n1. [Neural network foundations](#Neural-network-foundations)\n 1. [Automatic differentiation](#Automatic-differentiation)\n 1. [Modules](#Modules)\n 1. [Sequential](#Sequential)\n 1. [Criteria and loss functions](#Criteria-and-loss-functions)\n 1. [Optimization](#Optimization)\n 1. [Training a simple model](#Training-a-simple-model)\n1. [Reproducibility](#Reproducibility)\n1. [References](#References)",
"_____no_output_____"
],
[
"## Motivation",
"_____no_output_____"
],
[
"PyTorch is a Python package designed to carry out scientific computation. We use PyTorch in a range of different environments: local model development, large-scale deployments on big clusters, and even _inference_ in embedded, low-power systems. While similar in many aspects to NumPy, PyTorch enables us to perform fast and efficient training of deep learning and reinforcement learning models not only on the CPU but also on a GPU or other ASICs (Application Specific Integrated Circuits) for AI, such as Tensor Processing Units (TPU).",
"_____no_output_____"
],
[
"## Importing PyTorch",
"_____no_output_____"
],
[
"This tutorial assumes a working installation of PyTorch using your `nlu` environment, but the content applies to any regular installation of PyTorch. If you don't have a working installation of PyTorch, please follow the instructions in [the setup notebook](setup.ipynb).\n\nTo get started working with PyTorch we simply begin by importing the torch module:",
"_____no_output_____"
]
],
[
[
"import torch",
"_____no_output_____"
]
],
[
[
"**Side note**: why not `import pytorch`? The name of the package is `torch` for historical reasons: `torch` is the orginal name of the ancestor of the PyTorch library that got started back in 2002 as a C library with Lua scripting. It was only much later that the original `torch` was ported to Python. The PyTorch project decided to prefix the Py to make clear that this library refers to the Python version, as it was confusing back then to know which `torch` one was referring to. All the internal references to the library use just `torch`. It's possible that PyTorch will be renamed at some point, as the original `torch` is no longer maintained and there is no longer confusion.",
"_____no_output_____"
],
[
"We can see the version installed and determine whether or not we have a GPU-enabled PyTorch install by issuing",
"_____no_output_____"
]
],
[
[
"print(\"PyTorch version {}\".format(torch.__version__))\nprint(\"GPU-enabled installation? {}\".format(torch.cuda.is_available()))",
"PyTorch version 1.8.0\nGPU-enabled installation? False\n"
]
],
[
[
"PyTorch has good [documentation](https://pytorch.org/docs/stable/index.html) but it can take some time to familiarize oneself with the structure of the package; it's worth the effort to do so!\n\nWe will also make use of other imports:",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"## Tensors",
"_____no_output_____"
],
[
"Tensors collections of numbers represented as an array, and are the basic building blocks in PyTorch.\n\nYou are probably already familiar with several types of tensors:\n \n- A scalar, a single number, is a zero-th order tensor.\n \n- A column vector $v$ of dimensionality $d_c \\times 1$ is a tensor of order 1.\n \n- A row vector $x$ of dimensionality $1 \\times d_r$ is a tensor of order 1.\n \n- A matrix $A$ of dimensionality $d_r \\times d_c$ is a tensor of order 2.\n \n- A cube $T$ of dimensionality $d_r \\times d_c \\times d_d$ is a tensor of order 3. \n\nTensors are the fundamental blocks that carry information in our mathematical models, and they are composed using several operations to create mathematical graphs in which information can flow (propagate) forward (functional application) and backwards (using the chain rule). \n\nWe have seen multidimensional arrays in NumPy. These NumPy objects are also a representation of tensors.",
"_____no_output_____"
],
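[
"# A small illustrative sketch of tensor orders (variable names here are just for illustration)\nscalar = torch.tensor(5.0)  # order 0\nvector = torch.tensor([1.0, 2.0, 3.0])  # order 1\nmatrix = torch.tensor([[1.0, 2.0], [3.0, 4.0]])  # order 2\nprint(scalar.dim(), vector.dim(), matrix.dim())",
"_____no_output_____"
],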
[
"**Side note**: what is a tensor __really__? Tensors are important mathematical objects with applications in multiple domains in mathematics and physics. The term \"tensor\" comes from the usage of these mathematical objects to describe the stretching of a volume of matter under *tension*. They are central objects of study in a subfield of mathematics known as differential geometry, which deals with the geometry of continuous vector spaces. As a very high-level summary (and as first approximation), tensors are defined as multi-linear \"machines\" that have a number of slots (their order, a.k.a. rank), taking a number of \"column\" vectors and \"row\" vectors *to produce a scalar*. For example, a tensor $\\mathbf{A}$ (represented by a matrix with rows and columns that you could write on a sheet of paper) can be thought of having two slots. So when $\\mathbf{A}$ acts upon a column vector $\\mathbf{v}$ and a row vector $\\mathbf{x}$, it returns a scalar:\n \n$$\\mathbf{A}(\\mathbf{x}, \\mathbf{v}) = s$$\n \nIf $\\mathbf{A}$ only acts on the column vector, for example, the result will be another column tensor $\\mathbf{u}$ of one order less than the order of $\\mathbf{A}$. Thus, when $\\mathbf{v}$ acts is similar to \"removing\" its slot: \n\n$$\\mathbf{u} = \\mathbf{A}(\\mathbf{v})$$\n\nThe resulting $\\mathbf{u}$ can later interact with another row vector to produce a scalar or be used in any other way. \n\nThis can be a very powerful way of thinking about tensors, as their slots can guide you when writing code, especially given that PyTorch has a _functional_ approach to modules in which this view is very much highlighted. As we will see below, these simple equations above have a completely straightforward representation in the code. In the end, most of what our models will do is to process the input using this type of functional application so that we end up having a tensor output and a scalar value that measures how good our output is with respect to the real output value in the dataset.",
"_____no_output_____"
],
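[
"# A rough sketch of the 'slots' view from the side note above (purely illustrative)\nA = torch.randn(3, 3)  # an order-2 tensor: two slots\nx = torch.randn(1, 3)  # a 'row' vector\nv = torch.randn(3, 1)  # a 'column' vector\ns = x @ A @ v  # both slots filled -> a scalar (shape 1x1)\nu = A @ v      # one slot filled -> an order-1 object (shape 3x1)\nprint(s.shape, u.shape)",
"_____no_output_____"
],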
[
"### Tensor creation",
"_____no_output_____"
],
[
"Let's get started with tensors in PyTorch. The framework supports eight different types ([Lapan 2018](#References)):\n\n- 3 float types (16-bit, 32-bit, 64-bit): `torch.FloatTensor` is the class name for the commonly used 32-bit tensor.\n- 5 integer types (signed 8-bit, unsigned 8-bit, 16-bit, 32-bit, 64-bit): common tensors of these types are the 8-bit unsigned tensor `torch.ByteTensor` and the 64-bit `torch.LongTensor`.\n\nThere are three fundamental ways to create tensors in PyTorch ([Lapan 2018](#References)):\n\n- Call a tensor constructor of a given type, which will create a non-initialized tensor. So we then need to fill this tensor later to be able to use it.\n- Call a built-in method in the `torch` module that returns a tensor that is already initialized.\n- Use the PyTorch–NumPy bridge.",
"_____no_output_____"
],
[
"#### Calling the constructor",
"_____no_output_____"
],
[
"Let's first create a 2 x 3 dimensional tensor of the type `float`:",
"_____no_output_____"
]
],
[
[
"t = torch.FloatTensor(2, 3)\nprint(t)\nprint(t.size())",
"tensor([[9.8091e-45, 0.0000e+00, 0.0000e+00],\n [0.0000e+00, 0.0000e+00, 0.0000e+00]])\ntorch.Size([2, 3])\n"
]
],
[
[
"Note that we specified the dimensions as the arguments to the constructor by passing the numbers directly – and not a list or a tuple, which would have very different outcomes as we will see below! We can always inspect the size of the tensor using the `size()` method.\n\nThe constructor method allocates space in memory for this tensor. However, the tensor is *non-initialized*. In order to initialize it, we need to call any of the tensor initialization methods of the basic tensor types. For example, the tensor we just created has a built-in method `zero_()`:",
"_____no_output_____"
]
],
[
[
"t.zero_()",
"_____no_output_____"
]
],
[
[
"The underscore after the method name is important: it means that the operation happens _in place_: the returned object is the same object but now with different content. A very handy way to construct a tensor using the constructor happens when we have available the content we want to put in the tensor in the form of a Python iterable. In this case, we just pass it as the argument to the constructor:",
"_____no_output_____"
]
],
[
[
"torch.FloatTensor([[1, 2, 3], [4, 5, 6]])",
"_____no_output_____"
]
],
[
[
"#### Calling a method in the torch module",
"_____no_output_____"
],
[
"A very convenient way to create tensors, in addition to using the constructor method, is to use one of the multiple methods provided in the `torch` module. In particular, the `tensor` method allows us to pass a number or iterable as the argument to get the appropriately typed tensor:",
"_____no_output_____"
]
],
[
[
"tl = torch.tensor([1, 2, 3])\nt = torch.tensor([1., 2., 3.])\nprint(\"A 64-bit integer tensor: {}, {}\".format(tl, tl.type()))\nprint(\"A 32-bit float tensor: {}, {}\".format(t, t.type()))",
"A 64-bit integer tensor: tensor([1, 2, 3]), torch.LongTensor\nA 32-bit float tensor: tensor([1., 2., 3.]), torch.FloatTensor\n"
]
],
[
[
"We can create a similar 2x3 tensor to the one above by using the `torch.zeros()` method, passing a sequence of dimensions to it: ",
"_____no_output_____"
]
],
[
[
"t = torch.zeros(2, 3)\nprint(t)",
"tensor([[0., 0., 0.],\n [0., 0., 0.]])\n"
]
],
[
[
"There are many methods for creating tensors. We list some useful ones:",
"_____no_output_____"
]
],
[
[
"t_zeros = torch.zeros_like(t) # zeros_like returns a new tensor\nt_ones = torch.ones(2, 3) # creates a tensor with 1s\nt_fives = torch.empty(2, 3).fill_(5) # creates a non-initialized tensor and fills it with 5\nt_random = torch.rand(2, 3) # creates a uniform random tensor\nt_normal = torch.randn(2, 3) # creates a normal random tensor\n\nprint(t_zeros)\nprint(t_ones)\nprint(t_fives)\nprint(t_random)\nprint(t_normal)",
"tensor([[0., 0., 0.],\n [0., 0., 0.]])\ntensor([[1., 1., 1.],\n [1., 1., 1.]])\ntensor([[5., 5., 5.],\n [5., 5., 5.]])\ntensor([[0.2930, 0.9685, 0.0458],\n [0.9844, 0.1271, 0.2497]])\ntensor([[-0.4341, 0.1566, -0.7059],\n [ 0.5371, 0.0399, 0.5803]])\n"
]
],
[
[
"We now see emerging two important paradigms in PyTorch. The _imperative_ approach to performing operations, using _inplace_ methods, is in marked contrast with an additional paradigm also used in PyTorch, the _functional_ approach, where the returned object is a copy of the original object. Both paradigms have their specific use cases as we will be seeing below. The rule of thumb is that _inplace_ methods are faster and don't require extra memory allocation in general, but they can be tricky to understand (keep this in mind regarding the computational graph that we will see below). _Functional_ methods make the code referentially transparent, which is a highly desired property that makes it easier to understand the underlying math, but we rely on the efficiency of the implementation:",
"_____no_output_____"
]
],
[
[
"# creates a new copy of the tensor that is still linked to\n# the computational graph (see below)\nt1 = torch.clone(t)\nassert id(t) != id(t1), 'Functional methods create a new copy of the tensor'\n\n# To create a new _independent_ copy, we do need to detach\n# from the graph\nt1 = torch.clone(t).detach()",
"_____no_output_____"
]
],
[
[
"#### Using the PyTorch–NumPy bridge",
"_____no_output_____"
],
[
"A quite useful feature of PyTorch is its almost seamless integration with NumPy, which allows us to perform operations on NumPy and interact from PyTorch with the large number of NumPy libraries as well. Converting a NumPy multi-dimensional array into a PyTorch tensor is very simple: we only need to call the `tensor` method with NumPy objects as the argument:",
"_____no_output_____"
]
],
[
[
"# Create a new multi-dimensional array in NumPy with the np datatype (np.float32)\na = np.array([1., 2., 3.])\n\n# Convert the array to a torch tensor\nt = torch.tensor(a)\n\nprint(\"NumPy array: {}, type: {}\".format(a, a.dtype))\nprint(\"Torch tensor: {}, type: {}\".format(t, t.dtype))",
"NumPy array: [1. 2. 3.], type: float64\nTorch tensor: tensor([1., 2., 3.], dtype=torch.float64), type: torch.float64\n"
]
],
[
[
"We can also seamlessly convert a PyTorch tensor into a NumPy array:",
"_____no_output_____"
]
],
[
[
"t.numpy()",
"_____no_output_____"
]
],
[
[
"**Side note**: why not `torch.from_numpy(a)`? The `from_numpy()` method is depecrated in favor of `tensor()`, which is a more capable method in the torch package. `from_numpy()` is only there for backwards compatibility. It can be a little bit quirky, so I recommend using the newer method in PyTorch >= 0.4.",
"_____no_output_____"
],
[
"#### Indexing",
"_____no_output_____"
],
[
"\n\nIndexing works as expected with NumPy:",
"_____no_output_____"
]
],
[
[
"t = torch.randn(2, 3)\nt[ : , 0]",
"_____no_output_____"
]
],
[
[
"PyTorch also supports indexing using long tensors, for example:",
"_____no_output_____"
]
],
[
[
"t = torch.randn(5, 6)\nprint(t)\ni = torch.tensor([1, 3])\nj = torch.tensor([4, 5])\nprint(t[i]) # selects rows 1 and 3\nprint(t[i, j]) # selects (1, 4) and (3, 5)",
"tensor([[ 0.0421, 0.0713, -2.1790, 0.3855, 1.8714, -0.2528],\n [-0.3344, 0.8028, -0.2878, -0.2721, 1.0514, -1.3336],\n [-1.3506, 0.5657, 1.4540, -0.7039, -0.6878, 0.9614],\n [ 0.3845, 0.4493, -0.8910, -1.4512, 0.1300, 1.5551],\n [ 0.9192, 0.3812, -1.3167, 0.4005, -0.0778, 1.5110]])\ntensor([[-0.3344, 0.8028, -0.2878, -0.2721, 1.0514, -1.3336],\n [ 0.3845, 0.4493, -0.8910, -1.4512, 0.1300, 1.5551]])\ntensor([1.0514, 1.5551])\n"
]
],
[
[
"#### Type conversion",
"_____no_output_____"
],
[
"Each tensor has a set of convenient methods to convert types. For example, if we want to convert the tensor above to a 32-bit float tensor, we use the method `.float()`:",
"_____no_output_____"
]
],
[
[
"t = t.float() # converts to 32-bit float\nprint(t)\nt = t.double() # converts to 64-bit float\nprint(t)\nt = t.byte() # converts to unsigned 8-bit integer\nprint(t)",
"tensor([[ 0.0421, 0.0713, -2.1790, 0.3855, 1.8714, -0.2528],\n [-0.3344, 0.8028, -0.2878, -0.2721, 1.0514, -1.3336],\n [-1.3506, 0.5657, 1.4540, -0.7039, -0.6878, 0.9614],\n [ 0.3845, 0.4493, -0.8910, -1.4512, 0.1300, 1.5551],\n [ 0.9192, 0.3812, -1.3167, 0.4005, -0.0778, 1.5110]])\ntensor([[ 0.0421, 0.0713, -2.1790, 0.3855, 1.8714, -0.2528],\n [-0.3344, 0.8028, -0.2878, -0.2721, 1.0514, -1.3336],\n [-1.3506, 0.5657, 1.4540, -0.7039, -0.6878, 0.9614],\n [ 0.3845, 0.4493, -0.8910, -1.4512, 0.1300, 1.5551],\n [ 0.9192, 0.3812, -1.3167, 0.4005, -0.0778, 1.5110]],\n dtype=torch.float64)\ntensor([[ 0, 0, 254, 0, 1, 0],\n [ 0, 0, 0, 0, 1, 255],\n [255, 0, 1, 0, 0, 0],\n [ 0, 0, 0, 255, 0, 1],\n [ 0, 0, 255, 0, 0, 1]], dtype=torch.uint8)\n"
]
],
[
[
"### Operations on tensors",
"_____no_output_____"
],
[
"Now that we know how to create tensors, let's create some of the fundamental tensors and see some common operations on them:",
"_____no_output_____"
]
],
[
[
"# Scalars =: creates a tensor with a scalar\n# (zero-th order tensor, i.e. just a number)\ns = torch.tensor(42)\nprint(s)",
"tensor(42)\n"
]
],
[
[
"**Tip**: a very convenient to access scalars is with `.item()`:",
"_____no_output_____"
]
],
[
[
"s.item()",
"_____no_output_____"
]
],
[
[
"Let's see higher-order tensors – remember we can always inspect the dimensionality of a tensor using the `.size()` method:",
"_____no_output_____"
]
],
[
[
"# Row vector\nx = torch.randn(1,3)\nprint(\"Row vector\\n{}\\nwith size {}\".format(x, x.size()))\n\n# Column vector\nv = torch.randn(3,1)\nprint(\"Column vector\\n{}\\nwith size {}\".format(v, v.size()))\n\n# Matrix\nA = torch.randn(3, 3)\nprint(\"Matrix\\n{}\\nwith size {}\".format(A, A.size()))",
"Row vector\ntensor([[ 1.5811, -0.0148, -2.1993]])\nwith size torch.Size([1, 3])\nColumn vector\ntensor([[-0.5085],\n [ 1.1261],\n [ 0.2023]])\nwith size torch.Size([3, 1])\nMatrix\ntensor([[-1.6138, 0.5316, -0.6007],\n [ 0.8608, -0.9427, 0.0111],\n [-0.7028, -1.2223, 1.1992]])\nwith size torch.Size([3, 3])\n"
]
],
[
[
"A common operation is matrix-vector multiplication (and in general tensor-tensor multiplication). For example, the product $\\mathbf{A}\\mathbf{v} + \\mathbf{b}$ is as follows:",
"_____no_output_____"
]
],
[
[
"u = torch.matmul(A, v)\nprint(u)\nb = torch.randn(3,1)\ny = u + b # we can also do torch.add(u, b)\nprint(y)",
"tensor([[ 1.2978],\n [-1.4971],\n [-0.7765]])\ntensor([[ 0.6784],\n [-2.1156],\n [ 0.5897]])\n"
]
],
[
[
"where we retrieve the expected result (a column vector of dimensions 3x1). We can of course compose operations:",
"_____no_output_____"
]
],
[
[
"s = torch.matmul(x, torch.matmul(A, v))\nprint(s.item())",
"3.781867027282715\n"
]
],
[
[
"There are many functions implemented for every tensor, and we encourage you to study the documentation. Some of the most common ones:",
"_____no_output_____"
]
],
[
[
"# common tensor methods (they also have the counterpart in\n# the torch package, e.g. as torch.sum(t))\nt = torch.randn(2,3)\nt.sum(dim=0)\nt.t() # transpose\nt.numel() # number of elements in tensor\nt.nonzero() # indices of non-zero elements\nt.view(-1, 2) # reorganizes the tensor to these dimensions\nt.squeeze() # removes size 1 dimensions\nt.unsqueeze(0) # inserts a dimension\n\n# operations in the package\ntorch.arange(0, 10) # tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\ntorch.eye(3, 3) # creates a 3x3 matrix with 1s in the diagonal (identity in this case)\nt = torch.arange(0, 3)\ntorch.cat((t, t)) # tensor([0, 1, 2, 0, 1, 2])\ntorch.stack((t, t)) # tensor([[0, 1, 2],\n # [0, 1, 2]])",
"_____no_output_____"
]
],
[
[
"## GPU computation",
"_____no_output_____"
],
[
"Deep Learning frameworks take advantage of the powerful computational capabilities of modern graphic processing units (GPUs). GPUs were originally designed to perform frequent operations for graphics very efficiently and fast, such as linear algebra operations, which makes them ideal for our interests. PyTorch makes it very easy to use the GPU: the common scenario is to tell the framework that we want to instantiate a tensor with a type that makes it a GPU tensor, or move a given CPU tensor to the GPU. All the tensors that we have seen above are CPU tensors, and PyTorch has the counterparts for GPU tensors in the `torch.cuda` module. Let's see how this works.\n\nA common way to explicitly declare the tensor type as a GPU tensor is through the use of the constructor method for tensor creation inside the `torch.cuda` module:",
"_____no_output_____"
]
],
[
[
"try:\n t_gpu = torch.cuda.FloatTensor(3, 3) # creation of a GPU tensor\n t_gpu.zero_() # initialization to zero\nexcept TypeError as err:\n print(err)",
"type torch.cuda.FloatTensor not available. Torch not compiled with CUDA enabled.\n"
]
],
[
[
"However, a more common approach that gives us flexibility is through the use of devices. A device in PyTorch refers to either the CPU (indicated by the string \"cpu\") or one of the possible GPU cards in the machine (indicated by the string \"cuda:$n$\", where $n$ is the index of the card). Let's create a random gaussian matrix using a method from the `torch` package, and set the computational device to be the GPU by specifying the `device` to be `cuda:0`, the first GPU card in our machine (this code will fail if you don't have a GPU, but we will work around that below): ",
"_____no_output_____"
]
],
[
[
"try:\n t_gpu = torch.randn(3, 3, device=\"cuda:0\")\nexcept AssertionError as err:\n print(err)\n t_gpu = None\n\nt_gpu",
"Torch not compiled with CUDA enabled\n"
]
],
[
[
"As you can notice, the tensor now has the explicit device set to be a CUDA device, not a CPU device. Let's now create a tensor in the CPU and move it to the GPU:\n ",
"_____no_output_____"
]
],
[
[
"# we could also state explicitly the device to be the\n# CPU with torch.randn(3,3,device=\"cpu\")\nt = torch.randn(3, 3)\nt",
"_____no_output_____"
]
],
[
[
"In this case, the device is the CPU, but PyTorch does not explicitly say that given that this is the default behavior. To copy the tensor to the GPU we use the `.to()` method that every tensor implements, passing the device as an argument. This method creates a copy in the specified device or, if the tensor already resides in that device, it returns the original tensor ([Lapan 2018](#References)): ",
"_____no_output_____"
]
],
[
[
"try:\n t_gpu = t.to(\"cuda:0\") # copies the tensor from CPU to GPU\n # note that if we do now t_to_gpu.to(\"cuda:0\") it will\n # return the same tensor without doing anything else\n # as this tensor already resides on the GPU\n print(t_gpu)\n print(t_gpu.device)\nexcept AssertionError as err:\n print(err)",
"Torch not compiled with CUDA enabled\n"
]
],
[
[
"**Tip**: When we program PyTorch models, we will have to specify the device in several places (not so many, but definitely more than once). A good practice that is consistent accross the implementation and makes the code more portable is to declare early in the code a device variable by querying the framework if there is a GPU available that we can use. We can do this by writing",
"_____no_output_____"
]
],
[
[
"device = torch.device(\"cuda:0\") if torch.cuda.is_available() else torch.device(\"cpu\")\nprint(device)",
"cpu\n"
]
],
[
[
"We can then use `device` as an argument of the `.to()` method in the rest of our code:",
"_____no_output_____"
]
],
[
[
"# moves t to the device (this code will **not** fail if the\n# local machine has not access to a GPU)\nt.to(device)",
"_____no_output_____"
]
],
[
[
"**Side note**: having good GPU backend support is a critical aspect of a deep learning framework. Some models depend crucially on performing computations on a GPU. Most frameworks, including PyTorch, only provide good support for GPUs manufactured by Nvidia. This is mostly due to the heavy investment this company made on CUDA (Compute Unified Device Architecture), the underlying parallel computing platform that enables this type of scientific computing (and the reason for the device label), with specific implementations targeted to Deep Neural Networks as cuDNN. Other GPU manufacturers, most notably AMD, are making efforts to towards enabling ML computing in their cards, but their support is still partial.",
"_____no_output_____"
],
[
"## Neural network foundations",
"_____no_output_____"
],
[
"Computing gradients is a crucial feature in deep learning, given that the training procedure of neural networks relies on optimization techniques that update the parameters of the model by using the gradient information of a scalar magnitude – the loss function. How is it possible to compute the derivatives? There are different methods, namely\n\n- **Symbolic Differentiation**: given a symbolic expression, the software provides the derivative by performing symbolic transformations (e.g. Wolfram Alpha). The benefits are clear, but it is not always possible to compute an analytical expression.\n\n- **Numerical Differentiation**: computes the derivatives using expressions that are suitable to be evaluated numerically, using the finite differences method to several orders of approximation. A big drawback is that these methods are slow.\n\n- **Automatic Differentiation**: a library adds to the set of functional primitives an implementation of the derivative for each of these functions. Thus, if the library contains the function $sin(x)$, it also implements the derivative of this function, $\\frac{d}{dx}sin(x) = cos(x)$. Then, given a composition of functions, the library can compute the derivative with respect a variable by successive application of the chain rule, a method that is known in deep learning as backpropagation.",
"_____no_output_____"
],
[
"### Automatic differentiation",
"_____no_output_____"
],
[
"Modern deep learning libraries are capable of performing automatic differentiation. The two main approaches to computing the graph are _static_ and _dynamic_ processing ([Lapan 2018](#References)):\n\n- **Static graphs**: the deep learning framework converts the computational graph into a static representation that cannot be modified. This allows the library developers to do very aggressive optimizations on this static graph ahead of computation time, pruning some areas and transforming others so that the final product is highly optimized and fast. The drawback is that some models can be really hard to implement with this approach. For example, TensorFlow uses static graphs. Having static graphs is part of the reason why TensorFlow has excellent support for sequence processing, which makes it very popular in NLP.\n\n- **Dynamic graphs**: the framework does not create a graph ahead of computation, but records the operations that are performed, which can be quite different for different inputs. When it is time to compute the gradients, it unrolls the graph and perform the computations. A major benefit of this approach is that implementing complex models can be easier in this paradigm. This flexibility comes at the expense of the major drawback of this approach: speed. Dynamic graphs cannot leverage the same level of ahead-of-time optimization as static graphs, which makes them slower. PyTorch uses dynamic graphs as the underlying paradigm for gradient computation.",
"_____no_output_____"
],
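[
"Before looking at how the graph is built, here is a minimal sketch (assuming `torch` is already imported) of the $\\frac{d}{dx}\\sin(x) = \\cos(x)$ example mentioned above, computed with PyTorch's dynamic graph:\n\n```python\nimport torch\n\nx = torch.tensor(2.0, requires_grad=True)\ny = torch.sin(x)     # forward pass: the operation is recorded in the graph\ny.backward()         # backward pass: the chain rule is applied automatically\nprint(x.grad)        # equals cos(2.0), the analytic derivative\nprint(torch.cos(x))  # direct computation, for comparison\n```",
"_____no_output_____"
],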
[
"Here is simple graph to compute $y = wx + b$ (from [Rao and MacMahan 2019](#References-and-Further-Reading)):",
"_____no_output_____"
],
[
"<img src=\"fig/simple_computation_graph.png\" width=500 />",
"_____no_output_____"
],
[
"PyTorch computes the graph using the Autograd system. Autograd records a graph when performing the forward pass (function application), keeping track of all the tensors defined as inputs. These are the leaves of the graph. The output tensors are the roots of the graph. By navigating this graph from root to leaves, the gradients are automatically computed using the chain rule. In summary,\n\n- Forward pass (the successive function application) goes from leaves to root. We use the `apply` method in PyTorch.\n- Once the forward pass is completed, Autograd has recorded the graph and the backward pass (chain rule) can be done. We use the method `backwards` on the root of the graph.",
"_____no_output_____"
],
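[
"As a minimal sketch of the $y = wx + b$ graph in the figure above (again assuming `torch` is imported), the leaves are the tensors created with `requires_grad=True`, the root is `y`, and calling `backward` on the root fills in the gradients of the leaves:\n\n```python\nimport torch\n\nw = torch.tensor(3.0, requires_grad=True)  # leaf\nb = torch.tensor(1.0, requires_grad=True)  # leaf\nx = torch.tensor(2.0)                      # input, no gradient needed\n\ny = w * x + b  # forward pass: builds the graph, y is the root\ny.backward()   # backward pass: chain rule from the root to the leaves\n\nprint(w.grad)  # dy/dw = x = 2.0\nprint(b.grad)  # dy/db = 1.0\n```",
"_____no_output_____"
],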
[
"### Modules",
"_____no_output_____"
],
[
"The base implementation for all neural network models in PyTorch is the class `Module` in the package `torch.nn`:",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn",
"_____no_output_____"
]
],
[
[
"All our models subclass this base `nn.Module` class, which provides an interface to important methods used for constructing and working with our models, and which contains sensible initializations for our models. Modules can contain other modules (and usually do).\n\nLet's see a simple, custom implementation of a multi-layer feed forward network. In the example below, our simple mathematical model is\n\n$$\\mathbf{y} = \\mathbf{U}(f(\\mathbf{W}(\\mathbf{x})))$$\n\nwhere $f$ is a non-linear function (a `ReLU`), is directly translated into a similar expression in PyTorch. To do that, we simply subclass `nn.Module`, register the two affine transformations and the non-linearity, and implement their composition within the `forward` method:",
"_____no_output_____"
]
],
[
[
"class MyCustomModule(nn.Module):\n def __init__(self, n_inputs, n_hidden, n_output_classes):\n # call super to initialize the class above in the hierarchy\n super(MyCustomModule, self).__init__()\n # first affine transformation\n self.W = nn.Linear(n_inputs, n_hidden)\n # non-linearity (here it is also a layer!)\n self.f = nn.ReLU()\n # final affine transformation\n self.U = nn.Linear(n_hidden, n_output_classes)\n\n def forward(self, x):\n y = self.U(self.f(self.W(x)))\n return y",
"_____no_output_____"
]
],
[
[
"Then, we can use our new module as follows:",
"_____no_output_____"
]
],
[
[
"# set the network's architectural parameters\nn_inputs = 3\nn_hidden= 4\nn_output_classes = 2\n\n# instantiate the model\nmodel = MyCustomModule(n_inputs, n_hidden, n_output_classes)\n\n# create a simple input tensor\n# size is [1,3]: a mini-batch of one example,\n# this example having dimension 3\nx = torch.FloatTensor([[0.3, 0.8, -0.4]])\n\n# compute the model output by **applying** the input to the module\ny = model(x)\n\n# inspect the output\nprint(y)",
"tensor([[0.2473, 0.1775]], grad_fn=<AddmmBackward>)\n"
]
],
[
[
"As we see, the output is a tensor with its gradient function attached – Autograd tracks it for us.",
"_____no_output_____"
],
[
"**Tip**: modules overrides the `__call__()` method, where the framework does some work. Thus, instead of directly calling the `forward()` method, we apply the input to the model instead.",
"_____no_output_____"
],
[
"### Sequential",
"_____no_output_____"
],
[
"A powerful class in the `nn` package is `Sequential`, which allows us to express the code above more succinctly:",
"_____no_output_____"
]
],
[
[
"class MyCustomModule(nn.Module):\n def __init__(self, n_inputs, n_hidden, n_output_classes):\n super(MyCustomModule, self).__init__()\n self.network = nn.Sequential(\n nn.Linear(n_inputs, n_hidden),\n nn.ReLU(),\n nn.Linear(n_hidden, n_output_classes))\n\n def forward(self, x):\n y = self.network(x)\n return y",
"_____no_output_____"
]
],
[
[
"As you can imagine, this can be handy when we have a large number of layers for which the actual names are not that meaningful. It also improves readability:",
"_____no_output_____"
]
],
[
[
"class MyCustomModule(nn.Module):\n def __init__(self, n_inputs, n_hidden, n_output_classes):\n super(MyCustomModule, self).__init__()\n self.p_keep = 0.7\n self.network = nn.Sequential(\n nn.Linear(n_inputs, n_hidden),\n nn.ReLU(),\n nn.Linear(n_hidden, 2*n_hidden),\n nn.ReLU(),\n nn.Linear(2*n_hidden, n_output_classes),\n # dropout argument is probability of dropping\n nn.Dropout(1 - self.p_keep),\n # applies softmax in the data dimension\n nn.Softmax(dim=1)\n )\n\n def forward(self, x):\n y = self.network(x)\n return y",
"_____no_output_____"
]
],
[
[
"**Side note**: Another important package in `torch.nn` is `Functional`, typically imported as `F`. Functional contains many useful functions, from non-linear activations to convolutional, dropout, and even distance functions. Many of these functions have counterpart implementations as layers in the `nn` package so that they can be easily used in pipelines like the one above implemented using `nn.Sequential`.",
"_____no_output_____"
]
],
[
[
"import torch.nn.functional as F\n\ny = F.relu(torch.FloatTensor([[-5, -1, 0, 5]]))\n\ny",
"_____no_output_____"
]
],
[
[
"### Criteria and loss functions",
"_____no_output_____"
],
[
"PyTorch has implementations for the most common criteria in the `torch.nn` package. You may notice that, as with many of the other functions, there are two implementations of loss functions: the reference functions in `torch.nn.functional` and practical class in `torch.nn`, which are the ones we typically use. Probably the two most common ones are ([Lapan 2018](#References)):\n\n- `nn.MSELoss` (mean squared error): squared $L_2$ norm used for regression.\n- `nn.CrossEntropyLoss`: criterion used for classification as the result of combining `nn.LogSoftmax()` and `nn.NLLLoss()` (negative log likelihood), operating on the input scores directly. When possible, we recommend using this class instead of using a softmax layer plus a log conversion and `nn.NLLLoss`, given that the `LossSoftmax` implementation guards against common numerical errors, resulting in less instabilities.\n\nOnce our model produces a prediction, we pass it to the criteria to obtain a measure of the loss:",
"_____no_output_____"
]
],
[
[
"# the true label (in this case, 2) from our dataset wrapped\n# as a tensor of minibatch size of 1\ny_gold = torch.tensor([1])\n\n# our simple classification criterion for this simple example\ncriterion = nn.CrossEntropyLoss()\n\n# forward pass of our model (remember, using apply instead of forward)\ny = model(x)\n\n# apply the criterion to get the loss corresponding to the pair (x, y)\n# with respect to the real y (y_gold)\nloss = criterion(y, y_gold)\n\n\n# the loss contains a gradient function that we can use to compute\n# the gradient dL/dw (gradient with respect to the parameters\n# for a given fixed input)\nprint(loss)",
"tensor(0.7287, grad_fn=<NllLossBackward>)\n"
]
],
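[
[
"As a quick side check (a small sketch, not needed for the rest of the notebook), we can verify the claim above that `nn.CrossEntropyLoss` combines `nn.LogSoftmax` and `nn.NLLLoss` by comparing the two computations on the same scores:\n\n```python\nimport torch\nimport torch.nn as nn\n\nlogits = torch.randn(1, 3)  # raw scores for a minibatch of one example and 3 classes\ntarget = torch.tensor([2])  # the true class index\n\nce = nn.CrossEntropyLoss()(logits, target)\nnll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)\nprint(ce, nll)  # the two values should agree up to numerical precision\n```",
"_____no_output_____"
]
],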
[
[
"### Optimization",
"_____no_output_____"
],
[
"Once we have computed the loss for a training example or minibatch of examples, we update the parameters of the model guided by the information contained in the gradient. The role of updating the parameters belongs to the optimizer, and PyTorch has a number of implementations available right away – and if you don't find your preferred optimizer as part of the library, chances are that you will find an existing implementation. Also, coding your own optimizer is indeed quite easy in PyTorch.\n\n**Side Note** The following is a summary of the most common optimizers. It is intended to serve as a reference (I use this table myself quite a lot). In practice, most people pick an optimizer that has been proven to behave well on a given domain, but optimizers are also a very active area of research on numerical analysis, so it is a good idea to pay some attention to this subfield. We recommend using second-order dynamics with an adaptive time step:\n\n- First-order dynamics\n - Search direction only: `optim.SGD`\n - Adaptive: `optim.RMSprop`, `optim.Adagrad`, `optim.Adadelta`\n \n- Second-order dynamics\n - Search direction only: Momentum `optim.SGD(momentum=0.9)`, Nesterov, `optim.SGD(nesterov=True)`\n - Adaptive: `optim.Adam`, `optim.Adamax` (Adam with $L_\\infty$)",
"_____no_output_____"
],
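[
"Whichever optimizer we pick, the usage pattern is the same: clear the old gradients, compute new ones with `backward`, and let the optimizer update the parameters. Here is a minimal sketch with a toy parameter and loss (for plain SGD the update is roughly `p <- p - lr * p.grad`):\n\n```python\nimport torch\nimport torch.optim as optim\n\nw = torch.randn(3, requires_grad=True)  # a toy parameter\noptimizer = optim.SGD([w], lr=0.1)\n\nloss = (w ** 2).sum()  # a toy loss\noptimizer.zero_grad()  # clear the gradients from the previous step\nloss.backward()        # compute dloss/dw\noptimizer.step()       # update the parameter using its gradient\n```",
"_____no_output_____"
],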
[
"### Training a simple model",
"_____no_output_____"
],
[
"In order to illustrate the different concepts and techniques above, let's put them together in a very simple example: our objective will be to fit a very simple non-linear function, a sine wave:\n\n$$y = a \\sin(x + \\phi)$$\n\nwhere $a, \\phi$ are the given amplitude and phase of the sine function. Our objective is to learn to adjust this function using a feed forward network, this is:\n\n$$ \\hat{y} = f(x)$$\n\nsuch that the error between $y$ and $\\hat{y}$ is minimal according to our criterion. A natural criterion is to minimize the squared distance between the actual value of the sine wave and the value predicted by our function approximator, measured using the $L_2$ norm.\n\n**Side Note**: Although this example is easy, simple variations of this setting can pose a big challenge, and are used currently to illustrate difficult problems in learning, especially in a very active subfield known as meta-learning.",
"_____no_output_____"
],
[
"Let's import all the modules that we are going to need:",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.utils.data as data\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport math",
"_____no_output_____"
]
],
[
[
"Early on the code, we define the device that we want to use:",
"_____no_output_____"
]
],
[
[
"device = torch.device(\"cuda:0\") if torch.cuda.is_available() else torch.device(\"cpu\")",
"_____no_output_____"
]
],
[
[
"Let's fix $a=1$, $\\phi=1$ and generate traning data in the interval $x \\in [0,2\\pi)$ using NumPy:",
"_____no_output_____"
]
],
[
[
"M = 1200\n\n# sample from the x axis M points\nx = np.random.rand(M) * 2*math.pi\n\n# add noise\neta = np.random.rand(M) * 0.01\n\n# compute the function\ny = np.sin(x) + eta\n\n# plot\n_ = plt.scatter(x,y)",
"_____no_output_____"
],
[
"# use the NumPy-PyTorch bridge\nx_train = torch.tensor(x[0:1000]).float().view(-1, 1).to(device)\ny_train = torch.tensor(y[0:1000]).float().view(-1, 1).to(device)\n\nx_test = torch.tensor(x[1000:]).float().view(-1, 1).to(device)\ny_test = torch.tensor(y[1000:]).float().view(-1, 1).to(device)",
"_____no_output_____"
],
[
"class SineDataset(data.Dataset):\n def __init__(self, x, y):\n super(SineDataset, self).__init__()\n assert x.shape[0] == y.shape[0]\n self.x = x\n self.y = y\n\n def __len__(self):\n return self.y.shape[0]\n\n def __getitem__(self, index):\n return self.x[index], self.y[index]\n\nsine_dataset = SineDataset(x_train, y_train)\n\nsine_dataset_test = SineDataset(x_test, y_test)\n\nsine_loader = torch.utils.data.DataLoader(\n sine_dataset, batch_size=32, shuffle=True)\n\nsine_loader_test = torch.utils.data.DataLoader(\n sine_dataset_test, batch_size=32)",
"_____no_output_____"
],
[
"class SineModel(nn.Module):\n def __init__(self):\n super(SineModel, self).__init__()\n self.network = nn.Sequential(\n nn.Linear(1, 5),\n nn.ReLU(),\n nn.Linear(5, 5),\n nn.ReLU(),\n nn.Linear(5, 5),\n nn.ReLU(),\n nn.Linear(5, 1))\n\n def forward(self, x):\n return self.network(x)",
"_____no_output_____"
],
[
"# declare the model\nmodel = SineModel().to(device)\n\n# define the criterion\ncriterion = nn.MSELoss()\n\n# select the optimizer and pass to it the parameters of the model it will optimize\noptimizer = torch.optim.Adam(model.parameters(), lr = 0.01)\n\nepochs = 1000\n\n# training loop\nfor epoch in range(epochs):\n for i, (x_i, y_i) in enumerate(sine_loader):\n\n y_hat_i = model(x_i) # forward pass\n\n loss = criterion(y_hat_i, y_i) # compute the loss and perform the backward pass\n\n optimizer.zero_grad() # cleans the gradients\n loss.backward() # computes the gradients\n optimizer.step() # update the parameters\n\n if epoch % 20:\n plt.scatter(x_i.data.cpu().numpy(), y_hat_i.data.cpu().numpy())",
"_____no_output_____"
],
[
"# testing\nwith torch.no_grad():\n model.eval()\n total_loss = 0.\n for k, (x_k, y_k) in enumerate(sine_loader_test):\n y_hat_k = model(x_k)\n loss_test = criterion(y_hat_k, y_k)\n total_loss += float(loss_test)\n\nprint(total_loss)",
"0.005281522491713986\n"
]
],
[
[
"## Reproducibility",
"_____no_output_____"
]
],
[
[
"def enforce_reproducibility(seed=42):\n # Sets seed manually for both CPU and CUDA\n torch.manual_seed(seed)\n # For atomic operations there is currently\n # no simple way to enforce determinism, as\n # the order of parallel operations is not known.\n #\n # CUDNN\n torch.backends.cudnn.deterministic = True\n torch.backends.cudnn.benchmark = False\n # System based\n np.random.seed(seed)\n\nenforce_reproducibility()",
"_____no_output_____"
]
],
[
[
"The function `utils.fix_random_seeds()` extends the above to the random seeds for NumPy and the Python `random` library.",
"_____no_output_____"
],
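[
"The `utils` module itself is not shown in this notebook, so the following is only a rough sketch (an assumption about what such a helper might look like) that seeds all three sources of randomness:\n\n```python\nimport random\n\nimport numpy as np\nimport torch\n\n\ndef fix_random_seeds(seed=42):\n    # hypothetical helper: seed PyTorch, NumPy and the Python random module\n    torch.manual_seed(seed)\n    np.random.seed(seed)\n    random.seed(seed)\n```",
"_____no_output_____"
],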
[
"## References",
"_____no_output_____"
],
[
"Lapan, Maxim (2018) *Deep Reinforcement Learning Hands-On*. Birmingham: Packt Publishing\n\nRao, Delip and Brian McMahan (2019) *Natural Language Processing with PyTorch*. Sebastopol, CA: O'Reilly Media",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
d0c384cac136970faa813f9c5ebc9853657e105b | 10,714 | ipynb | Jupyter Notebook | mod_07/mod_07_01.ipynb | merazlab/python | 43073779db177b50518f2708508f0375894eb254 | [
"MIT"
] | null | null | null | mod_07/mod_07_01.ipynb | merazlab/python | 43073779db177b50518f2708508f0375894eb254 | [
"MIT"
] | 1 | 2020-06-10T00:58:51.000Z | 2020-06-10T01:13:21.000Z | mod_07/mod_07_01.ipynb | merazlab/python | 43073779db177b50518f2708508f0375894eb254 | [
"MIT"
] | null | null | null | 10,714 | 10,714 | 0.680698 | [
[
[
"import builtins\n#help(builtins)",
"_____no_output_____"
],
[
"x = int(input(\"enter first no\"))\ny = int(input(\"enter second no\"))\n\nz = x / y\n\nprint(\"result\", z)",
"enter first no5\nenter second no0\n"
],
[
"x = int(input(\"enter first no\"))\ny = int(input(\"enter second no\"))\ntry:\n z = x / y\n print(\"result\", z)\nexcept(ZeroDivisionError):\n print(\"second value is not zero\")\n\nprint(\"end\")\n",
"enter first no5\nenter second no0\nsecond value is not zero\nend\n"
],
[
"try:\n x = int(input(\"enter first no\"))\n y = int(input(\"enter second no\"))\n\n z = x / y\n print(\"result\", z)\nexcept(ValueError):\n print(\"Value eroor\")\n\nprint(\"end\")\n",
"enter first no2\nenter second nof\nValue eroor\nend\n"
]
],
[
[
"Default exception block\n\nHandle any type off exception",
"_____no_output_____"
]
],
[
[
"",
"enter first no5\nenter second now\nerror occured\nend\n"
]
],
[
[
"Single try with multiple except block",
"_____no_output_____"
]
],
[
[
"try:\n x = int(input(\"enter first no\"))\n y = int(input(\"enter second no\"))\n\n z = x / y\n print(\"result\", z)\nexcept(ZeroDivisionError):\n print(\"second value is not zero\")\nexcept(ValueError):\n print(\"Value eroor\")\nexcept:\n print(\"error occured\")\n\nprint(\"end\")\n",
"enter first no7\nenter second nof\nValue eroor\nend\n"
]
],
[
[
"Raise exception\n\n-Default exception block is last block always",
"_____no_output_____"
]
],
[
[
"try:\n x = int(input(\"enter first no\"))\n y = int(input(\"enter second no\"))\n\n z = x / y\n print(\"result\", z)\n raise KeyError\nexcept(ZeroDivisionError):\n print(\"second value is not zero\")\nexcept(ValueError):\n print(\"Value eroor\")\nexcept:\n print(\"error occured\")\n\nprint(\"end\")",
"enter first no4\nenter second no3\nresult 1.3333333333333333\nerror occured\nend\n"
]
],
[
[
"##finally",
"_____no_output_____"
],
[
"The sets of statements which are compulsary to exectute whether exception is raised or not raised\n\nException- Only user frienly statement",
"_____no_output_____"
],
[
"Resource releasing statemnet are written in finally group",
"_____no_output_____"
]
],
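[
[
"For example (a small sketch using the `myfile.txt` file that also appears later in this notebook), a file can be released in the finally block so it is closed whether or not an exception is raised:\n\n```python\nf = open(\"myfile.txt\")\ntry:\n    print(f.read())\nfinally:\n    f.close()  # runs whether or not an exception was raised\n    print(\"file closed\")\n```",
"_____no_output_____"
]
],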
[
[
"try:\n x = int(input(\"enter first no\"))\n y = int(input(\"enter second no\"))\n\n z = x / y\n print(\"result\", z)\nexcept(ZeroDivisionError):\n print(\"second value is not zero\")\nfinally: #run exception raised or not raised\n print(\"bye\")\n\nprint(\"end\")",
"enter first no6\nenter second no2\nresult 3.0\nbye\nend\n"
],
[
"x = open(\"myfile.txt\")\nprint(x.read())\nx.write(\"meraz\")\nx.close()\nprint(file is closed)",
"\n"
],
[
"",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c38f2394b4f715e51ee8b8c7a0c72306e0a535 | 93,214 | ipynb | Jupyter Notebook | examples/Popups.ipynb | uni-3/folium | 67aab11039cd990d73fdf14566380286835ff84b | [
"MIT"
] | 5,451 | 2015-01-03T01:36:01.000Z | 2022-03-30T16:14:33.000Z | examples/Popups.ipynb | uni-3/folium | 67aab11039cd990d73fdf14566380286835ff84b | [
"MIT"
] | 1,341 | 2015-01-01T16:01:49.000Z | 2022-03-31T10:08:11.000Z | examples/Popups.ipynb | uni-3/folium | 67aab11039cd990d73fdf14566380286835ff84b | [
"MIT"
] | 2,510 | 2015-01-04T09:56:20.000Z | 2022-03-31T03:12:45.000Z | 245.3 | 39,731 | 0.940073 | [
[
[
"# How to create Popups\n\n## Simple popups\n\nYou can define your popup at the feature creation, but you can also overwrite them afterwards:",
"_____no_output_____"
]
],
[
[
"import folium\n\n\nm = folium.Map([45, 0], zoom_start=4)\n\nfolium.Marker([45, -30], popup=\"inline implicit popup\").add_to(m)\n\nfolium.CircleMarker(\n location=[45, -10],\n radius=25,\n fill=True,\n popup=folium.Popup(\"inline explicit Popup\"),\n).add_to(m)\n\nls = folium.PolyLine(\n locations=[[43, 7], [43, 13], [47, 13], [47, 7], [43, 7]], color=\"red\"\n)\n\nls.add_child(folium.Popup(\"outline Popup on Polyline\"))\nls.add_to(m)\n\ngj = folium.GeoJson(\n data={\"type\": \"Polygon\", \"coordinates\": [[[27, 43], [33, 43], [33, 47], [27, 47]]]}\n)\n\ngj.add_child(folium.Popup(\"outline Popup on GeoJSON\"))\ngj.add_to(m)\n\nm",
"_____no_output_____"
],
[
"m = folium.Map([45, 0], zoom_start=2)\n\nfolium.Marker(\n location=[45, -10],\n popup=folium.Popup(\"Let's try quotes\", parse_html=True, max_width=100),\n).add_to(m)\n\nfolium.Marker(\n location=[45, -30],\n popup=folium.Popup(u\"Ça c'est chouette\", parse_html=True, max_width=\"100%\"),\n).add_to(m)\n\nm",
"_____no_output_____"
]
],
[
[
"## Vega Popup\n\nYou may know that it's possible to create awesome Vega charts with (or without) `vincent`. If you're willing to put one inside a popup, it's possible thanks to `folium.Vega`.",
"_____no_output_____"
]
],
[
[
"import json\n\nimport numpy as np\nimport vincent\n\nscatter_points = {\n \"x\": np.random.uniform(size=(100,)),\n \"y\": np.random.uniform(size=(100,)),\n}\n\n# Let's create the vincent chart.\nscatter_chart = vincent.Scatter(scatter_points, iter_idx=\"x\", width=600, height=300)\n\n# Let's convert it to JSON.\nscatter_json = scatter_chart.to_json()\n\n# Let's convert it to dict.\nscatter_dict = json.loads(scatter_json)",
"_____no_output_____"
],
[
"m = folium.Map([43, -100], zoom_start=4)\n\npopup = folium.Popup()\nfolium.Vega(scatter_chart, height=350, width=650).add_to(popup)\nfolium.Marker([30, -120], popup=popup).add_to(m)\n\n# Let's create a Vega popup based on scatter_json.\npopup = folium.Popup(max_width=0)\nfolium.Vega(scatter_json, height=350, width=650).add_to(popup)\nfolium.Marker([30, -100], popup=popup).add_to(m)\n\n# Let's create a Vega popup based on scatter_dict.\npopup = folium.Popup(max_width=650)\nfolium.Vega(scatter_dict, height=350, width=650).add_to(popup)\nfolium.Marker([30, -80], popup=popup).add_to(m)\n\nm",
"_____no_output_____"
]
],
[
[
"## Fancy HTML popup",
"_____no_output_____"
]
],
[
[
"import branca\n\nm = folium.Map([43, -100], zoom_start=4)\n\nhtml = \"\"\"\n <h1> This is a big popup</h1><br>\n With a few lines of code...\n <p>\n <code>\n from numpy import *<br>\n exp(-2*pi)\n </code>\n </p>\n \"\"\"\n\n\nfolium.Marker([30, -100], popup=html).add_to(m)\n\nm",
"_____no_output_____"
]
],
[
[
"You can also put any HTML code inside of a Popup, thaks to the `IFrame` object.",
"_____no_output_____"
]
],
[
[
"m = folium.Map([43, -100], zoom_start=4)\n\nhtml = \"\"\"\n <h1> This popup is an Iframe</h1><br>\n With a few lines of code...\n <p>\n <code>\n from numpy import *<br>\n exp(-2*pi)\n </code>\n </p>\n \"\"\"\n\niframe = branca.element.IFrame(html=html, width=500, height=300)\npopup = folium.Popup(iframe, max_width=500)\n\nfolium.Marker([30, -100], popup=popup).add_to(m)\n\nm",
"_____no_output_____"
],
[
"import pandas as pd\n\ndf = pd.DataFrame(\n data=[[\"apple\", \"oranges\"], [\"other\", \"stuff\"]], columns=[\"cats\", \"dogs\"]\n)\n\nm = folium.Map([43, -100], zoom_start=4)\n\nhtml = df.to_html(\n classes=\"table table-striped table-hover table-condensed table-responsive\"\n)\n\npopup = folium.Popup(html)\n\nfolium.Marker([30, -100], popup=popup).add_to(m)\n\nm",
"_____no_output_____"
]
],
[
[
"Note that you can put another `Figure` into an `IFrame` ; this should let you do stange things...",
"_____no_output_____"
]
],
[
[
"# Let's create a Figure, with a map inside.\nf = branca.element.Figure()\nfolium.Map([-25, 150], zoom_start=3).add_to(f)\n\n# Let's put the figure into an IFrame.\niframe = branca.element.IFrame(width=500, height=300)\nf.add_to(iframe)\n\n# Let's put the IFrame in a Popup\npopup = folium.Popup(iframe, max_width=2650)\n\n# Let's create another map.\nm = folium.Map([43, -100], zoom_start=4)\n\n# Let's put the Popup on a marker, in the second map.\nfolium.Marker([30, -100], popup=popup).add_to(m)\n\n# We get a map in a Popup. Not really useful, but powerful.\nm",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c3941eb9fe19a69b8309244205a659d2684690 | 542,660 | ipynb | Jupyter Notebook | site/en-snapshot/probability/examples/Eight_Schools.ipynb | NarimaneHennouni/docs-l10n | 39a48e0d5aa34950e29efd5c1f111c120185e9d9 | [
"Apache-2.0"
] | 2 | 2020-09-29T07:31:21.000Z | 2020-10-13T08:16:18.000Z | site/en-snapshot/probability/examples/Eight_Schools.ipynb | NarimaneHennouni/docs-l10n | 39a48e0d5aa34950e29efd5c1f111c120185e9d9 | [
"Apache-2.0"
] | null | null | null | site/en-snapshot/probability/examples/Eight_Schools.ipynb | NarimaneHennouni/docs-l10n | 39a48e0d5aa34950e29efd5c1f111c120185e9d9 | [
"Apache-2.0"
] | null | null | null | 828.48855 | 219,308 | 0.935258 | [
[
[
"##### Copyright 2018 The TensorFlow Probability Authors.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\"); { display-mode: \"form\" }\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Eight schools\n\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/probability/examples/Eight_Schools\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Eight_Schools.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"The eight schools problem ([Rubin 1981](https://www.jstor.org/stable/1164617)) considers the effectiveness of SAT coaching programs conducted in parallel at eight schools. It has become a classic problem ([Bayesian Data Analysis](http://www.stat.columbia.edu/~gelman/book/), [Stan](https://github.com/stan-dev/rstan/wiki/RStan-Getting-Started)) that illustrates the usefulness of hierarchical modeling for sharing information between exchangeable groups.\n\nThe implemention below is an adaptation of an Edward 1.0 [tutorial](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb).",
"_____no_output_____"
],
[
"# Imports",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\n\nimport tensorflow.compat.v2 as tf\nimport tensorflow_probability as tfp\nfrom tensorflow_probability import distributions as tfd\nimport warnings\n\ntf.enable_v2_behavior()\n\nplt.style.use(\"ggplot\")\nwarnings.filterwarnings('ignore')",
"_____no_output_____"
]
],
[
[
"# The Data\n\nFrom Bayesian Data Analysis, section 5.5 (Gelman et al. 2013):\n\n> *A study was performed for the Educational Testing Service to analyze the effects of special coaching programs for SAT-V (Scholastic Aptitude Test-Verbal) in each of eight high schools. The outcome variable in each study was the score on a special administration of the SAT-V, a standardized multiple choice test administered by the Educational Testing Service and used to help colleges make admissions decisions; the scores can vary between 200 and 800, with mean about 500 and standard deviation about 100. The SAT examinations are designed to be resistant to short-term efforts directed specifically toward improving performance on the test; instead they are designed to reflect knowledge acquired and abilities developed over many years of education. Nevertheless, each of the eight schools in this study considered its short-term coaching program to be very successful at increasing SAT scores. Also, there was no prior reason to believe that any of the eight programs was more effective than any other or that some were more similar in effect to each other than to any other.*\n\n\nFor each of the eight schools ($J = 8$), we have an estimated treatment effect $y_j$ and a standard error of the effect estimate $\\sigma_j$. The treatment effects in the study were obtained by a linear regression on the treatment group using PSAT-M and PSAT-V scores as control variables. As there was no prior belief that any of the schools were more or less similar or that any of the coaching programs would be more effective, we can consider the treatment effects as [exchangeable](https://en.wikipedia.org/wiki/Exchangeable_random_variables).",
"_____no_output_____"
]
],
[
[
"num_schools = 8 # number of schools\ntreatment_effects = np.array(\n [28, 8, -3, 7, -1, 1, 18, 12], dtype=np.float32) # treatment effects\ntreatment_stddevs = np.array(\n [15, 10, 16, 11, 9, 11, 10, 18], dtype=np.float32) # treatment SE\n\nfig, ax = plt.subplots()\nplt.bar(range(num_schools), treatment_effects, yerr=treatment_stddevs)\nplt.title(\"8 Schools treatment effects\")\nplt.xlabel(\"School\")\nplt.ylabel(\"Treatment effect\")\nfig.set_size_inches(10, 8)\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Model\n\nTo capture the data, we use a hierarchical normal model. It follows the generative process,\n\n$$\n\\begin{align*}\n\\mu &\\sim \\text{Normal}(\\text{loc}{=}0,\\ \\text{scale}{=}10) \\\\\n\\log\\tau &\\sim \\text{Normal}(\\text{loc}{=}5,\\ \\text{scale}{=}1) \\\\\n\\text{for } & i=1\\ldots 8:\\\\\n& \\theta_i \\sim \\text{Normal}\\left(\\text{loc}{=}\\mu,\\ \\text{scale}{=}\\tau \\right) \\\\\n& y_i \\sim \\text{Normal}\\left(\\text{loc}{=}\\theta_i,\\ \\text{scale}{=}\\sigma_i \\right)\n\\end{align*}\n$$\n\nwhere $\\mu$ represents the prior average treatment effect and $\\tau$ controls how much variance there is between schools. The $y_i$ and $\\sigma_i$ are observed. As $\\tau \\rightarrow \\infty$, the model approaches the no-pooling model, i.e., each of the school treatment effect estimates are allowed to be more independent. As $\\tau \\rightarrow 0$, the model approaches the complete-pooling model, i.e., all of the school treatment effects are closer to the group average $\\mu$. To restrict the standard deviation to be positive, we draw $\\tau$ from a lognormal distribution (which is equivalent to drawing $log(\\tau)$ from a normal distribution).\n\nFollowing [Diagnosing Biased Inference with Divergences](http://mc-stan.org/users/documentation/case-studies/divergences_and_bias.html), we transform the model above into an equivalent non-centered model:\n\n$$\n\\begin{align*}\n\\mu &\\sim \\text{Normal}(\\text{loc}{=}0,\\ \\text{scale}{=}10) \\\\\n\\log\\tau &\\sim \\text{Normal}(\\text{loc}{=}5,\\ \\text{scale}{=}1) \\\\\n\\text{for } & i=1\\ldots 8:\\\\\n& \\theta_i' \\sim \\text{Normal}\\left(\\text{loc}{=}0,\\ \\text{scale}{=}1 \\right) \\\\\n& \\theta_i = \\mu + \\tau \\theta_i' \\\\\n& y_i \\sim \\text{Normal}\\left(\\text{loc}{=}\\theta_i,\\ \\text{scale}{=}\\sigma_i \\right) \n\\end{align*}\n$$\n\nWe reify this model as a [JointDistributionSequential](https://www.tensorflow.org/probability/api_docs/python/tfp/distributions/JointDistributionSequential) instance:",
"_____no_output_____"
]
],
[
[
"model = tfd.JointDistributionSequential([\n tfd.Normal(loc=0., scale=10., name=\"avg_effect\"), # `mu` above\n tfd.Normal(loc=5., scale=1., name=\"avg_stddev\"), # `log(tau)` above\n tfd.Independent(tfd.Normal(loc=tf.zeros(num_schools),\n scale=tf.ones(num_schools),\n name=\"school_effects_standard\"), # `theta_prime` \n reinterpreted_batch_ndims=1),\n lambda school_effects_standard, avg_stddev, avg_effect: (\n tfd.Independent(tfd.Normal(loc=(avg_effect[..., tf.newaxis] +\n tf.exp(avg_stddev[..., tf.newaxis]) *\n school_effects_standard), # `theta` above\n scale=treatment_stddevs),\n name=\"treatment_effects\", # `y` above\n reinterpreted_batch_ndims=1))\n])\n\ndef target_log_prob_fn(avg_effect, avg_stddev, school_effects_standard):\n \"\"\"Unnormalized target density as a function of states.\"\"\"\n return model.log_prob((\n avg_effect, avg_stddev, school_effects_standard, treatment_effects))",
"_____no_output_____"
]
],
[
[
"# Bayesian Inference\n\nGiven data, we perform Hamiltonian Monte Carlo (HMC) to calculate the posterior distribution over the model's parameters.",
"_____no_output_____"
]
],
[
[
"num_results = 5000\nnum_burnin_steps = 3000\n\n# Improve performance by tracing the sampler using `tf.function`\n# and compiling it using XLA.\[email protected](autograph=False, experimental_compile=True)\ndef do_sampling():\n return tfp.mcmc.sample_chain(\n num_results=num_results,\n num_burnin_steps=num_burnin_steps,\n current_state=[\n tf.zeros([], name='init_avg_effect'),\n tf.zeros([], name='init_avg_stddev'),\n tf.ones([num_schools], name='init_school_effects_standard'),\n ],\n kernel=tfp.mcmc.HamiltonianMonteCarlo(\n target_log_prob_fn=target_log_prob_fn,\n step_size=0.4,\n num_leapfrog_steps=3))\n\nstates, kernel_results = do_sampling()\n\navg_effect, avg_stddev, school_effects_standard = states\n\nschool_effects_samples = (\n avg_effect[:, np.newaxis] +\n np.exp(avg_stddev)[:, np.newaxis] * school_effects_standard)\n\nnum_accepted = np.sum(kernel_results.is_accepted)\nprint('Acceptance rate: {}'.format(num_accepted / num_results))",
"Acceptance rate: 0.5974\n"
],
[
"fig, axes = plt.subplots(8, 2, sharex='col', sharey='col')\nfig.set_size_inches(12, 10)\nfor i in range(num_schools):\n axes[i][0].plot(school_effects_samples[:,i].numpy())\n axes[i][0].title.set_text(\"School {} treatment effect chain\".format(i))\n sns.kdeplot(school_effects_samples[:,i].numpy(), ax=axes[i][1], shade=True)\n axes[i][1].title.set_text(\"School {} treatment effect distribution\".format(i))\naxes[num_schools - 1][0].set_xlabel(\"Iteration\")\naxes[num_schools - 1][1].set_xlabel(\"School effect\")\nfig.tight_layout()\nplt.show()",
"_____no_output_____"
],
[
"print(\"E[avg_effect] = {}\".format(np.mean(avg_effect)))\nprint(\"E[avg_stddev] = {}\".format(np.mean(avg_stddev)))\nprint(\"E[school_effects_standard] =\")\nprint(np.mean(school_effects_standard[:, ]))\nprint(\"E[school_effects] =\")\nprint(np.mean(school_effects_samples[:, ], axis=0))",
"E[avg_effect] = 5.57183933258\nE[avg_stddev] = 2.47738981247\nE[school_effects_standard] =\n0.08509017\nE[school_effects] =\n[15.0051 7.103311 2.4552586 6.2744603 1.3364682 3.1125953\n 12.762501 7.743602 ]\n"
],
[
"# Compute the 95% interval for school_effects\nschool_effects_low = np.array([\n np.percentile(school_effects_samples[:, i], 2.5) for i in range(num_schools)\n])\nschool_effects_med = np.array([\n np.percentile(school_effects_samples[:, i], 50) for i in range(num_schools)\n])\nschool_effects_hi = np.array([\n np.percentile(school_effects_samples[:, i], 97.5)\n for i in range(num_schools)\n])",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(nrows=1, ncols=1, sharex=True)\nax.scatter(np.array(range(num_schools)), school_effects_med, color='red', s=60)\nax.scatter(\n np.array(range(num_schools)) + 0.1, treatment_effects, color='blue', s=60)\n\nplt.plot([-0.2, 7.4], [np.mean(avg_effect),\n np.mean(avg_effect)], 'k', linestyle='--')\n\nax.errorbar(\n np.array(range(8)),\n school_effects_med,\n yerr=[\n school_effects_med - school_effects_low,\n school_effects_hi - school_effects_med\n ],\n fmt='none')\n\nax.legend(('avg_effect', 'HMC', 'Observed effect'), fontsize=14)\n\nplt.xlabel('School')\nplt.ylabel('Treatment effect')\nplt.title('HMC estimated school treatment effects vs. observed data')\nfig.set_size_inches(10, 8)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can observe the shrinkage toward the group `avg_effect` above.",
"_____no_output_____"
]
],
[
[
"print(\"Inferred posterior mean: {0:.2f}\".format(\n np.mean(school_effects_samples[:,])))\nprint(\"Inferred posterior mean se: {0:.2f}\".format(\n np.std(school_effects_samples[:,])))",
"Inferred posterior mean: 6.97\nInferred posterior mean se: 10.41\n"
]
],
[
[
"# Criticism\n\nTo get the posterior predictive distribution, i.e., a model of new data $y^*$ given the observed data $y$:\n\n$$ p(y^*|y) \\propto \\int_\\theta p(y^* | \\theta)p(\\theta |y)d\\theta$$\n\nwe override the values of the random variables in the model to set them to the mean of the posterior distribution, and sample from that model to generate new data $y^*$.",
"_____no_output_____"
]
],
[
[
"sample_shape = [5000]\n\n_, _, _, predictive_treatment_effects = model.sample(\n value=(tf.broadcast_to(np.mean(avg_effect, 0), sample_shape),\n tf.broadcast_to(np.mean(avg_stddev, 0), sample_shape),\n tf.broadcast_to(np.mean(school_effects_standard, 0),\n sample_shape + [num_schools]),\n None))",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)\nfig.set_size_inches(12, 10)\nfig.tight_layout()\nfor i, ax in enumerate(axes):\n sns.kdeplot(predictive_treatment_effects[:, 2*i].numpy(),\n ax=ax[0], shade=True)\n ax[0].title.set_text(\n \"School {} treatment effect posterior predictive\".format(2*i))\n sns.kdeplot(predictive_treatment_effects[:, 2*i + 1].numpy(),\n ax=ax[1], shade=True)\n ax[1].title.set_text(\n \"School {} treatment effect posterior predictive\".format(2*i + 1))\nplt.show()",
"_____no_output_____"
],
[
"# The mean predicted treatment effects for each of the eight schools.\nprediction = np.mean(predictive_treatment_effects, axis=0)",
"_____no_output_____"
]
],
[
[
"We can look at the residuals between the treatment effects data and the predictions of the model posterior. These correspond with the plot above which shows the shrinkage of the estimated effects toward the population average.",
"_____no_output_____"
]
],
[
[
"treatment_effects - prediction",
"_____no_output_____"
]
],
[
[
"Because we have a distribution of predictions for each school, we can consider the distribution of residuals as well.",
"_____no_output_____"
]
],
[
[
"residuals = treatment_effects - predictive_treatment_effects",
"_____no_output_____"
],
[
"fig, axes = plt.subplots(4, 2, sharex=True, sharey=True)\nfig.set_size_inches(12, 10)\nfig.tight_layout()\nfor i, ax in enumerate(axes):\n sns.kdeplot(residuals[:, 2*i].numpy(), ax=ax[0], shade=True)\n ax[0].title.set_text(\n \"School {} treatment effect residuals\".format(2*i))\n sns.kdeplot(residuals[:, 2*i + 1].numpy(), ax=ax[1], shade=True)\n ax[1].title.set_text(\n \"School {} treatment effect residuals\".format(2*i + 1))\nplt.show()",
"_____no_output_____"
]
],
[
[
"# Acknowledgements\n\nThis tutorial was originally written in Edward 1.0 ([source](https://github.com/blei-lab/edward/blob/master/notebooks/eight_schools.ipynb)). We thank all contributors to writing and revising that version.",
"_____no_output_____"
],
[
"# References\n1. Donald B. Rubin. Estimation in parallel randomized experiments. Journal of Educational Statistics, 6(4):377-401, 1981.\n2. Andrew Gelman, John Carlin, Hal Stern, David Dunson, Aki Vehtari, and Donald Rubin. Bayesian Data Analysis, Third Edition. Chapman and Hall/CRC, 2013.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c3b443c16891650e2744d65dc408c2aabfe130 | 8,968 | ipynb | Jupyter Notebook | src/models/Untitled.ipynb | dddTESTxx/Gym-Final | 4ce6eea0291fc4a4d886ce6b26c04b9d46d9aff8 | [
"MIT"
] | null | null | null | src/models/Untitled.ipynb | dddTESTxx/Gym-Final | 4ce6eea0291fc4a4d886ce6b26c04b9d46d9aff8 | [
"MIT"
] | null | null | null | src/models/Untitled.ipynb | dddTESTxx/Gym-Final | 4ce6eea0291fc4a4d886ce6b26c04b9d46d9aff8 | [
"MIT"
] | null | null | null | 23.6 | 101 | 0.396632 | [
[
[
"from sklearn import datasets\niris = datasets.load_iris()\ndigits = datasets.load_digits()",
"_____no_output_____"
],
[
"print(digits.data)",
"[[ 0. 0. 5. ..., 0. 0. 0.]\n [ 0. 0. 0. ..., 10. 0. 0.]\n [ 0. 0. 0. ..., 16. 9. 0.]\n ..., \n [ 0. 0. 1. ..., 6. 0. 0.]\n [ 0. 0. 2. ..., 12. 0. 0.]\n [ 0. 0. 10. ..., 12. 1. 0.]]\n"
],
[
"digits.target\n",
"_____no_output_____"
],
[
"digits.images[0]",
"_____no_output_____"
],
[
"from sklearn import svm",
"_____no_output_____"
],
[
"clf = svm.SVC(gamma=0.001, C=100.)",
"_____no_output_____"
],
[
"clf.fit(digits.data[:-1], digits.target[:-1])",
"_____no_output_____"
],
[
"clf.predict(digits.data[-1:])",
"_____no_output_____"
],
[
"import numpy as np\nimport pandas as pd\nimport statsmodels.formula.api as sm",
"_____no_output_____"
],
[
"gym = pd.read_csv('/Users/Dan/Downloads/Crowdedness gym/data.csv')",
"_____no_output_____"
],
[
"list(gym)",
"_____no_output_____"
],
[
"result = sm.ols(formula=\"number_people ~ is_weekend + temperature\", data=gym).fit()",
"_____no_output_____"
],
[
"print result.params",
"Intercept -47.654294\nis_weekend -9.155930\ntemperature 1.354520\ndtype: float64\n"
],
[
"print result.summary()",
" OLS Regression Results \n==============================================================================\nDep. Variable: number_people R-squared: 0.172\nModel: OLS Adj. R-squared: 0.172\nMethod: Least Squares F-statistic: 6476.\nDate: Wed, 07 Jun 2017 Prob (F-statistic): 0.00\nTime: 14:21:52 Log-Likelihood: -2.7648e+05\nNo. Observations: 62184 AIC: 5.530e+05\nDf Residuals: 62181 BIC: 5.530e+05\nDf Model: 2 \nCovariance Type: nonrobust \n===============================================================================\n coef std err t P>|t| [95.0% Conf. Int.]\n-------------------------------------------------------------------------------\nIntercept -47.6543 0.773 -61.674 0.000 -49.169 -46.140\nis_weekend -9.1559 0.184 -49.809 0.000 -9.516 -8.796\ntemperature 1.3545 0.013 103.339 0.000 1.329 1.380\n==============================================================================\nOmnibus: 3458.878 Durbin-Watson: 0.098\nProb(Omnibus): 0.000 Jarque-Bera (JB): 4060.444\nSkew: 0.616 Prob(JB): 0.00\nKurtosis: 3.223 Cond. No. 550.\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c3bb1dbfa4ecf9c61c7899f8a9253c6af60b89 | 5,507 | ipynb | Jupyter Notebook | Series.ipynb | MKamyab1991/Pandas_class | 8e3d1db17943ca91cd559a3dfa18961cadacf65b | [
"MIT"
] | 2 | 2022-02-25T22:02:53.000Z | 2022-02-26T02:20:35.000Z | Series.ipynb | MKamyab1991/Pandas_class | 8e3d1db17943ca91cd559a3dfa18961cadacf65b | [
"MIT"
] | null | null | null | Series.ipynb | MKamyab1991/Pandas_class | 8e3d1db17943ca91cd559a3dfa18961cadacf65b | [
"MIT"
] | null | null | null | 17.427215 | 81 | 0.428364 | [
[
[
"Create series in python",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nice_cream = ['vanilla','chocolate','mousse','rocky road']\n",
"_____no_output_____"
],
[
"pd.Series(ice_cream)",
"_____no_output_____"
],
[
"lottery = [1,2,8,4,5,22,8,9]\npd.Series(lottery)",
"_____no_output_____"
],
[
"registration = [True, False, False, True, True, True]\npd.Series(registration)",
"_____no_output_____"
],
[
"merriam = {\"Work\":\"Office\",\"Play\":\"Stadium\",\"Study\":\"Class\"}\npd.Series(merriam)",
"_____no_output_____"
],
[
"## Intro to Attributes",
"_____no_output_____"
],
[
"about_me = [\"Yo\",\"Yep\",\"thats\",\"awesomwe\"]\ns = pd.Series(about_me)\ns",
"_____no_output_____"
],
[
"s.values",
"_____no_output_____"
],
[
"s.index",
"_____no_output_____"
],
[
"s.dtype",
"_____no_output_____"
],
[
"prices = [1.44,6.8,9.65]\ns = pd.Series(prices)\ns",
"_____no_output_____"
],
[
"s.sum()",
"_____no_output_____"
],
[
"s.mean()",
"_____no_output_____"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c3bbc9a182932bf6093e957cb7210b123e3ae4 | 3,485 | ipynb | Jupyter Notebook | task_1_tabular.ipynb | DeepConnectAI/challenge-week-1 | 33c3fc296c28dfb75b1c47baef61ebfa877a3668 | [
"MIT"
] | 1 | 2020-08-11T05:03:36.000Z | 2020-08-11T05:03:36.000Z | task_1_tabular.ipynb | DeepConnectAI/challenge-week-1 | 33c3fc296c28dfb75b1c47baef61ebfa877a3668 | [
"MIT"
] | null | null | null | task_1_tabular.ipynb | DeepConnectAI/challenge-week-1 | 33c3fc296c28dfb75b1c47baef61ebfa877a3668 | [
"MIT"
] | 15 | 2020-08-10T15:59:25.000Z | 2020-08-20T09:14:00.000Z | 37.074468 | 455 | 0.657102 | [
[
[
"### Task Structured Tabular Data:\n\n#### Dataset Link:\nDataset can be found at \" /data/structured_data/data.csv \" in the respective challenge's repo.\n\n#### Description:\nTabular data is usually given in csv format (comma-separated-value). CSV files can be read and manipulated using pandas and numpy library in python. Most common datatypes in structured data are 'numerical' and 'categorical' data. Data processing is required to handle missing values, inconsistent string formats, missing commas, categorical variables and other different kinds of data inadequacies that you will get to experience in this course. \n\n#### Objective:\nHow to process and manipulate basic structured data for machine learning (Check out helpful links section to get hints)\n\n#### Tasks:\n- Load the csv file (pandas.read_csv function)\n- Classify columns into two groups - numerical and categorical. Print column names for each group.\n- Print first 10 rows after handling missing values\n- One-Hot encode the categorical data\n- Standarize or normalize the numerical columns\n\n#### Ask yourself:\n\n- Why do we need feature encoding and scaling techniques?\n- What is ordinal data and should we one-hot encode ordinal data? Are any better ways to encode it?\n- What's the difference between normalization and standardization? Which technique is most suitable for this sample dataset?\n- Can you solve the level-up challenge: Complete all the above tasks without using scikit-learn library ?\n\n#### Helpful Links:\n- Nice introduction to handle missing values: https://analyticsindiamag.com/5-ways-handle-missing-values-machine-learning-datasets/\n- Scikit-learn documentation for one hot encoding: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html\n- Difference between normalization and standardization: https://medium.com/towards-artificial-intelligence/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff",
"_____no_output_____"
]
],
[
[
"# Import the required libraries\n# Use terminal commands like \"pip install numpy\" to install packages\nimport numpy as np\nimport pandas as pd\n# import sklearn if and when required",
"_____no_output_____"
]
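,
[
"# A minimal sketch of the steps listed in the task description above.\n# It assumes the csv path given in the task and that pandas/numpy were imported\n# in the previous cell; the fill/encode/scale choices below are just one\n# reasonable option, not the only correct answer.\ndf = pd.read_csv(\"data/structured_data/data.csv\")\n\n# Split columns into numerical and categorical groups by dtype\nnumerical_cols = df.select_dtypes(include=[np.number]).columns.tolist()\ncategorical_cols = df.select_dtypes(exclude=[np.number]).columns.tolist()\nprint(\"Numerical:\", numerical_cols)\nprint(\"Categorical:\", categorical_cols)\n\n# Handle missing values: median for numeric columns, mode for categorical ones\ndf[numerical_cols] = df[numerical_cols].fillna(df[numerical_cols].median())\nfor col in categorical_cols:\n    df[col] = df[col].fillna(df[col].mode()[0])\nprint(df.head(10))\n\n# One-hot encode the categorical columns\ndf = pd.get_dummies(df, columns=categorical_cols)\n\n# Standardize (z-score) the numerical columns\ndf[numerical_cols] = (df[numerical_cols] - df[numerical_cols].mean()) / df[numerical_cols].std()",
"_____no_output_____"
]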
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
]
] |
d0c3bddbf194a03addc5ce6ddc092986084000c6 | 33,650 | ipynb | Jupyter Notebook | notebooks/Python-in-2-days/D1_L3_Python/__Python_Summary__.ipynb | slamb89/ml_training | 011120d80baaadf0f4b69f86df805e4ed43c0b53 | [
"MIT"
] | null | null | null | notebooks/Python-in-2-days/D1_L3_Python/__Python_Summary__.ipynb | slamb89/ml_training | 011120d80baaadf0f4b69f86df805e4ed43c0b53 | [
"MIT"
] | null | null | null | notebooks/Python-in-2-days/D1_L3_Python/__Python_Summary__.ipynb | slamb89/ml_training | 011120d80baaadf0f4b69f86df805e4ed43c0b53 | [
"MIT"
] | null | null | null | 22.418388 | 898 | 0.483774 | [
[
[
"# Introduction to Python",
"_____no_output_____"
],
[
"In this lesson we will learn the basics of the Python programming language (version 3). We won't learn everything about Python but enough to do some basic machine learning.\n\n<img src=\"figures/python.png\" width=350>\n\n\n",
"_____no_output_____"
],
[
"# Variables",
"_____no_output_____"
],
[
"Variables are objects in Python that can hold anything with numbers or text. Let's look at how to create some variables.",
"_____no_output_____"
]
],
[
[
"# Numerical example\nx = 99\nprint (x)\n\nx = 27 # Added numerical value of 27\nprint (x)\n\nx=55 # Added numerical value of 55\nprint (x)\n\n\"\"\"changed numerical values of x\"\"\"",
"99\n27\n55\n"
],
[
"# Text example\nx = \"learning to code is fun\" # Changed text to \"learning to code is fun\" and \"tomorrow\"\nprint (x)\n\nx=\"tomorrow\"\nprint(x)\n\n\"\"\"changed sentences and value of x. Modified spacing to see if it altered the output\"\"\"",
"learning to code is fun\ntomorrow\n"
],
[
"# Variables can be used with each other\na = 2 # Changed values of a, b, and c\nb = 298\nc = a + b\nprint (c)\n\na = 3\nb = 4\nc = 27\nd = 22\ne = a + b + c + d\nprint (e)\n\n\"\"\"Changed values of a, b, and c. Created additional values for new variables.\"\"\"",
"300\n56\n"
]
],
[
[
"Variables can come in lots of different types. Even within numerical variables, you can have integers (int), floats (float), etc. All text based variables are of type string (str). We can see what type a variable is by printing its type.",
"_____no_output_____"
]
],
[
[
"# int variable\nx = 2\nprint (x)\nprint (type(x))\n\nx = 1\nprint (x)\nprint (type(x))\n\n# float variable\nx = 7.7\nprint (x)\nprint (type(x))\n\nx = 2.25\nprint (x)\nprint (type(x))\n\n# text variable\nx = \"hello Sheri\" \nprint (x)\nprint (type(x))\n\nx = \"Thunderstorms\"\nprint (x)\nprint (type(x))\n\n# boolean variable\nx = False\nprint (x)\nprint (type(x))\n\nx = True\nprint (x)\nprint (type(x))\n\n\"\"\"Created new values for the variable x\"\"\"",
"2\n<class 'int'>\n1\n<class 'int'>\n7.7\n<class 'float'>\n2.25\n<class 'float'>\nhello Sheri\n<class 'str'>\nThunderstorms\n<class 'str'>\nFalse\n<class 'bool'>\nTrue\n<class 'bool'>\n"
]
],
[
[
"It's good practice to know what types your variables are. When you want to use numerical operations on them, they need to be compatible. ",
"_____no_output_____"
]
],
[
[
"# int variables\na = 6\nb = 2\nprint (a + b)\n\n# string variables\na = \"6\"\nb = \"2\"\nprint (a + b)\n\na = \"4\"\nb = \"3\"\nc = \"5\"\nprint (a + b + c)\n\na = 4\nb = 3\nc = 5\nprint (a + b + c)\n\n\"\"\"Changed existing value of int and string variables. Created new variables\"\"\"",
"8\n62\n435\n12\n"
]
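,
[
"# A small extra example: to mix a string and a number, convert one of them first\n# so that the types are compatible.\na = \"6\"\nb = 2\nprint (int(a) + b)   # 8  -> the string is converted to an int, then added\nprint (a + str(b))   # 62 -> the number is converted to a string, then concatenated",
"_____no_output_____"
]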
],
[
[
"# Lists",
"_____no_output_____"
],
[
"Lists are objects in Python that can hold a ordered sequence of numbers **and** text.",
"_____no_output_____"
]
],
[
[
"# Creating a list\nlist_x = [2, \"hello\", 1]\nprint (list_x)\n\nlist_a = [1, \"sheri lamb\", 4]\nprint (list_a)\n\n\"\"\"Created a new list a\"\"\"",
"[2, 'hello', 1]\n[1, 'sheri lamb', 4]\n"
],
[
"# Adding to a list\nlist_x.append(7)\nprint (list_x)\n\nlist_a.append(\"tomorrow\")\nprint (list_a)\n\n\"\"\"Added 'tomorrow' to my list\"\"\"",
"[2, 'hello', 1, 7, 7, 7]\n[1, 'sheri lamb', 4, 'tomorrow', 'tomorrow', 'tomorrow']\n"
],
[
"# Accessing items at specific location in a list\nprint (\"list_x[0]: \", list_x[0])\nprint (\"list_x[1]: \", list_x[1])\nprint (\"list_x[2]: \", list_x[2])\nprint (\"list_x[-1]: \", list_x[-1]) # the last item\nprint (\"list_x[-2]: \", list_x[-2]) # the second to last item\n\nprint (\"list_x[5]:\", list_x[5]) \n\n\"\"\"accessed item #5\"\"\"",
"list_x[0]: 1\nlist_x[1]: sheri lamb\nlist_x[2]: 4\nlist_x[-1]: tomorrow\nlist_x[-2]: 7\nlist_x[5]: 7\n"
],
[
"# Slicing\nprint (\"list_x[:]: \", list_x[:])\nprint (\"list_x[2:]: \", list_x[2:])\nprint (\"list_x[1:3]: \", list_x[1:3])\nprint (\"list_x[:-1]: \", list_x[:-1])\n\nprint (\"list_x[5:]: \", list_x[5:])\n\n\"\"\"added #5 to # slicing\"\"\"",
"list_x[:]: [1, 'sheri lamb', 4, 7, 'tomorrow', 7, 'tomorrow', 7, 'tomorrow']\nlist_x[2:]: [4, 7, 'tomorrow', 7, 'tomorrow', 7, 'tomorrow']\nlist_x[1:3]: ['sheri lamb', 4]\nlist_x[:-1]: [1, 'sheri lamb', 4, 7, 'tomorrow', 7, 'tomorrow', 7]\nlist_x[5:]: [7, 'tomorrow', 7, 'tomorrow']\n"
],
[
"# Length of a list\nlen(list_x)\nlen(list_x)\n\nlen(list_a)\n\"\"\"calculated the length of list_a\"\"\"",
"_____no_output_____"
],
[
"# Replacing items in a list\nlist_x[1] = \"hi\"\nprint (list_x)\n\nlist_a[1] = \"yes\"\nprint (list_a)\n\n\"\"\"replaced item 1 with yes\"\"\"",
"[2, 'hi', 1, 7, 7, 7]\n[1, 'yes', 4, 'tomorrow', 'tomorrow', 'tomorrow']\n"
],
[
"# Combining lists\nlist_y = [2.4, \"world\"]\nlist_z = list_x + list_y\nprint (list_z)\n\nlist_h = [1, 2,\"fire\"]\nlist_i = [4, 7, \"Stella\"]\nlist_4 = list_h + list_i\nprint (list_4)\n\n\"\"\"Created 2 new lists and combined them to create a third (list_4)\"\"\"",
"[2, 'hi', 1, 7, 7, 7, 2.4, 'world']\n[1, 2, 'fire', 4, 7, 'Stella']\n"
]
],
[
[
"# Tuples",
"_____no_output_____"
],
[
"Tuples are also objects in Python that can hold data but you cannot replace their values (for this reason, tuples are called immutable, whereas lists are known as mutable).",
"_____no_output_____"
]
],
[
[
"# Creating a tuple\ntuple_x = (3.0, \"hello\")\nprint (tuple_x)\n\ntuple_y = (5.0, \"Star\")\nprint (tuple_y)\n\n\"\"\"Created tuple y\"\"\"",
"(3.0, 'hello')\n(5.0, 'Star')\n"
],
[
"# Adding values to a tuple\ntuple_x = tuple_x + (5.6,)\nprint (tuple_x)\n\ntuple_z = tuple_y + (2.4,)\nprint (tuple_z)\n\n\"\"\"added 2.4 to tuple_z\"\"\"",
"(3.0, 'hello', 5.6, 5.6, 5.6, 5.6)\n(5.0, 'Star', 2.4)\n"
],
[
"# Trying to change a tuples value (you can't, this should produce an error.)\ntuple_x[1] = \"world\"\n\ntuple_z[1] = \"sunrise\"\n\n\"\"\"attempted to change the value of tuple_z\"\"\"",
"_____no_output_____"
]
],
[
[
"# Dictionaries",
"_____no_output_____"
],
[
"Dictionaries are Python objects that hold key-value pairs. In the example dictionary below, the keys are the \"name\" and \"eye_color\" variables. They each have a value associated with them. A dictionary cannot have two of the same keys. ",
"_____no_output_____"
]
],
[
[
"# Creating a dictionary\ndog = {\"name\": \"dog\",\n \"eye_color\": \"brown\"}\nprint (dog)\nprint (dog[\"name\"])\nprint (dog[\"eye_color\"])\n\nMAC = {\"brand\": \"MAC\", \"color\": \"red\"}\nprint (MAC)\nprint (MAC[\"brand\"])\nprint (MAC[\"color\"])\n\n\"\"\"Created a dictionary for MAC lipstick\"\"\"\n",
"{'name': 'dog', 'eye_color': 'brown'}\ndog\nbrown\n{'brand': 'MAC', 'color': 'red'}\nMAC\nred\n"
],
[
"# Changing the value for a key\ndog[\"eye_color\"] = \"green\"\nprint (dog)\n\nMAC[\"color\"] = \"pink\"\nprint (MAC)\n\n\"\"\"Changed the lipstick color from red to pink\"\"\"",
"{'name': 'dog', 'eye_color': 'green'}\n{'brand': 'MAC', 'color': 'pink'}\n"
],
[
"# Adding new key-value pairs\ndog[\"age\"] = 5\nprint (dog)\n\nMAC[\"age\"] = 1\nprint (MAC)\n\n\"\"\"Added an aditional value (age)\"\"\"",
"{'name': 'dog', 'eye_color': 'green', 'age': 5}\n{'brand': 'MAC', 'color': 'pink', 'age': 1}\n"
],
[
"# Length of a dictionary\nprint (len(dog))\n\nprint (len(MAC))\n\n\"\"\"Calculated length of MAC dictionary\"\"\"",
"3\n3\n"
]
],
[
[
"# If statements",
"_____no_output_____"
],
[
"You can use `if` statements to conditionally do something.",
"_____no_output_____"
]
],
[
[
"# If statement\nx = 4\nif x < 1:\n score = \"low\"\nelif x <= 4:\n score = \"medium\"\nelse:\n score = \"high\"\nprint (score)\n\nx = 5\nif x < 2:\n score = \"low\"\nelif x <= 5:\n score = \"medium\"\nelse:\n score = \"high\"\nprint (score)\n\nx = 10\nprint (score)\n\nx = 1\nprint (score)\n\n\"\"\"Added additional if statements (x = 5)\"\"\"",
"medium\nmedium\nmedium\nmedium\n"
],
[
"# If statment with a boolean\nx = True\nif x:\n print (\"it worked\")\n \ny = False\nif y: \n print (\"it did not work\") \n \nz = True\nif z:\n print (\"it almost worked\")\n \n \n\"\"\"Created true / false boolean statements\"\"\"",
"it worked\nit almost worked\n"
]
],
[
[
"# Loops",
"_____no_output_____"
],
[
"In Python, you can use `for` loop to iterate over the elements of a sequence such as a list or tuple, or use `while` loop to do something repeatedly as long as a condition holds.",
"_____no_output_____"
]
],
[
[
"# For loop\nx = 2 # x variable will start at 2 instead of 1\nfor i in range(5): # goes from i=0 to i=4 range is 5 instead of 3\n x += 1 # same as x = x + 1\n print (\"i={0}, x={1}\".format(i, x)) # printing with multiple variables(\n ",
"i=0, x=3\ni=1, x=4\ni=2, x=5\ni=3, x=6\ni=4, x=7\n"
],
[
"# Loop through items in a list\nx = 2 # changed x variable to 2, now x will start at 3 instead of 2\nfor i in [0, 1, 2, 3, 4]: # added two additional numbers to the list\n x += 1 # same as x = x +1\n print (\"i={0}, x={1}\".format(i, x))",
"i=0, x=3\ni=1, x=4\ni=2, x=5\ni=3, x=6\ni=4, x=7\n"
],
[
"# While loop\nx = 10 # Changed variable from 3 to 10\nwhile x > 3: # Changed the condition to 3\n x -= 1 # same as x = x - 1\n print (x)",
"9\n8\n7\n6\n5\n4\n3\n"
]
],
[
[
"# Functions",
"_____no_output_____"
],
[
"Functions are a way to modularize reusable pieces of code. ",
"_____no_output_____"
]
],
[
[
"# Create a function\ndef Shamel(x): # Redefined function's name \n x += 5 # Changed value of x\n return x\n\n# Use the function\nscore = 1\nscore = Shamel(x=score)\nprint (score)",
"6\n"
],
[
"# Function with multiple inputs\ndef join_name (first_name, middle_name, last_name): # Re-defined function\n joined_name = first_name + \" \" + middle_name + \" \" + last_name # Added middle name\n return joined_name\n\n# Use the function\nfirst_name = \"Sheri\" # Change display information \nmiddle_name = \"Nicole\"\nlast_name = \"Lamb\"\njoined_name = join_name(first_name=first_name, middle_name=middle_name, last_name=last_name)\nprint (joined_name)",
"Sheri Nicole Lamb\n"
]
],
[
[
"# Classes",
"_____no_output_____"
],
[
"Classes are a fundamental piece of object oriented programming in Python.",
"_____no_output_____"
]
],
[
[
"# Creating the class\nclass Cars(object): # Changed class to Cars\n \n # Initialize the class\n def __init__(self, brand, color, name): # Changed \"species\" to \"brand\"\n self.brand = brand\n self.color = color\n self.name = name\n\n # For printing \n def __str__(self):\n return \"{0} {1} named {2}.\".format(self.color, self.brand, self.name)\n\n # Example function\n def change_name(self, new_name):\n self.name = new_name",
"_____no_output_____"
],
[
"# Creating an instance of a class\nmy_car = Cars(brand=\"Jeep\", color=\"Spitfire Orange\", name=\"Rover\",) # Changed instances of car class\nprint (my_car)\nprint (my_car.name)",
"Spitfire Orange Jeep named Rover.\nRover\n"
],
[
"# Using a class's function\nmy_car.change_name(new_name=\"Sunshine\") # Changes cars name\nprint (my_car)\nprint (my_car.name)",
"Spitfire Orange Jeep named Sunshine.\nSunshine\n"
]
],
[
[
"# Additional resources",
"_____no_output_____"
],
[
"This was a very quick look at Python and we'll be learning more in future lessons. If you want to learn more right now before diving into machine learning, check out this free course: [Free Python Course](https://www.codecademy.com/learn/learn-python)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c3bdf820dcf98073413e23f21b2a7a27018e46 | 113,065 | ipynb | Jupyter Notebook | csc421_tzanetakis_probabilistic_reasonining_time.ipynb | oscarsandford/csc421_tzanetakis | a2f1ab741b7202ab9fb8d2e5bcab926a220a8c44 | [
"CC0-1.0"
] | 9 | 2021-09-10T02:12:42.000Z | 2022-03-30T00:35:47.000Z | csc421_tzanetakis_probabilistic_reasonining_time.ipynb | oscarsandford/csc421_tzanetakis | a2f1ab741b7202ab9fb8d2e5bcab926a220a8c44 | [
"CC0-1.0"
] | null | null | null | csc421_tzanetakis_probabilistic_reasonining_time.ipynb | oscarsandford/csc421_tzanetakis | a2f1ab741b7202ab9fb8d2e5bcab926a220a8c44 | [
"CC0-1.0"
] | 7 | 2021-09-19T13:21:04.000Z | 2022-03-03T08:55:34.000Z | 124.795806 | 6,896 | 0.848813 | [
[
[
"# CHAPTER 14 - Probabilistic Reasoning over Time \n\n### George Tzanetakis, University of Victoria \n",
"_____no_output_____"
],
[
"## WORKPLAN \n\nThe section number is based on the 4th edition of the AIMA textbook and is the suggested\nreading for this week. Each list entry provides just the additional sections. For example the Expected reading include the sections listed under Basic as well as the sections listed under Expected. Some additional readings are suggested for Advanced. \n\n1. Basic: Sections **14.1**, **14.3, and **Summary**\n2. Expected: Same as Basic plus 14.2 \n3. Advanced: All the chapter including bibligraphical and historical notes \n\n\n",
"_____no_output_____"
],
[
"## Time and Uncertainty \n\n\nAgents operate over time. They need to maintain a **belief state** (a set of variables (or random variables) indexed by time) that represents which states of the world are currently possible. From the **belief** state and a transition model, the agent can predict how the world might evolve in the next time step. From the percepts observed and a **sensor** model, the agent can update the **belief state**. \n\n* CSP: belief states are variables with domains \n* Logic: logical formulaes which belief states are possible \n* Probablities: probabilities which belief states are likely \n\n\n\n* **Transition model:** describe the probability distribution of the variables at time $t$ given the state of the world at past time \n* **Sensor model:** the probability of each percept at time $t$, given the current state of the world \n\n* Dynamic Bayesian Networks \n * Hidden Markov Models \n * Kalman Filters \n \n \n### States and Observations \n\n**Discret-time** models, the world is views as a series of **time slices** \n\nEach time slide contains a set of **random variables**, some observable and some not. \n\n*Example scenario:* you are the security guard stationed at a secret underground installation. \nYou want to know whether it is raining today, but your only access to the outside world \noccurs each morning when you see the director coming in with, or without an umbrella. \n\nFor each day $t$, the evidence set $E_t$ contains a single evidence variable $Umbrella_{t}$ or $U_t$. \nThe state set $S_t$ contains a single state variable $Rain_{t}$ or $R_t$. \n\n<img src=\"images/rain_umbrella_hmm.png\" width=\"75%\"/>\n\n\n\n### Transition and Sensor Models \n\n\n**TRANSITION MODEL** \n* General form: $P(X_t | X_{0:t-1})$\n\n**Markov Assumption**: Andrei Markov (1856-1922) the current state only depends on a fixed number of previous states \n\n* First-order markov process: $P(X_t | X_{0:t-1}) = P(X_t | X_{t-1})$\n\nTime homegeneous process: the conditional transition probabilities is the same for all time steps $t$. \n\n\nA Markov chain is a sequence of random variables\n$X_1, X_2, X_3, . . .$ with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:\n* $P(X_{n+1} = x|X_{1} = x_1,X_2 = x_2,...,X_n = x_n) = P(X_{n+1} = x|X_n = x_n)$\n\n\n<img src=\"images/markov.png\" width=\"30%\"/>\n\n\nThe possible values of $X_i$ form a countable set $S$ called the state space of the chain. A **Markov Chain** can be specified by a transition matrix with the probabilities of going from a particular state to another state at every time step.\n\n",
"_____no_output_____"
],
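[
"# A minimal sketch of the last point above: a 2-state Markov chain is fully\n# specified by its transition matrix, and one time step simply multiplies the\n# current state distribution by that matrix. The 0.7/0.3 numbers assume the\n# rain / no-rain transition probabilities of the umbrella example in the figure.\nimport numpy as np\n\nT = np.array([[0.7, 0.3],   # P(next state | Rain)\n              [0.3, 0.7]])  # P(next state | NoRain)\np = np.array([1.0, 0.0])    # suppose we know it is raining at t = 0\nfor t in range(1, 6):\n    p = p @ T               # propagate the state distribution one step\n    print('t =', t, ' P(Rain), P(NoRain) =', p)",
"_____no_output_____"
],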
[
"## Sensor model/observations \n\nThere are many application areas, for example speech recognition, in which we are interesting in modeling probability distributions over sequences of observations. We will denote the observation at time $t$ by the variable $Y_t$. The variable can be a symbol from a discrete alphabet or a continuous variable and we assume that the observations are sampled at discrete equally-spaced time intervals so $t$ can be an integer-valued time index.",
"_____no_output_____"
],
[
"## Inference in Temporal Models \n\n* **Filtering:** we want to compute the posterior distribution over the current state, given all evidence to date. $P(X_t|e_{1:t})$. An almost identical calculation provides the likelihood of the evidence sequence $P(e_{1:T})$.\n* **Prediction:** we want to computer the posterior distribution over the future state, given all evidence to date. $P(Xt+k|e_{1:t})$ for some $k > 0$.\n\n* **Smoothing or hindsight:** computing the posterior distribution over a past state, given all evidence up to the present: $P(X_{t-k}|e_{1:t})$ for some $k < t$. It provides a better estimate of the state than what was available at the time, because it incorporates more evidence.\n\n* **Most likely explanation:** Given a sequence of observations, we might wish to find the sequence of states that is most likely to have generated these observations. That is we wish to compute: \n$argmax_{x_{1:t}} P(x_{1:t}|e_{1:t})$. This is the typical inference task in Speech Recognition using Hidden Markov Models.",
"_____no_output_____"
],
[
"### Sidenote: Speech Recognition",
"_____no_output_____"
],
[
"In phonology and linguistics, a phoneme is a unit of sound that can distinguish one word from another in a particular language. For example the english words **book** and **took** differ in one phoneme (the b vs t sound) \nand contain the same two remaining phonemes the **oo** sound and **k** sound. There is a clear correspondence between the written alphabet symbols of a word and the corresponding phonemes but in English there is a lot of confusing variation. For example the writtern symbols **oo** correspond to a different phoneme in the word **door**. In languages like Spanish or Greek there is a stronger direct correspondance between the written symbols and phonemes making it possible to \"read\" a Greek text without making phoneme errors even if you \ndon't know the underlying words something much harder to do in English. \n\nThe task of speech recognition is to take as input an audio recording a human talking and convert that recording to written words. It is possible to convert written words to sequences of phonemes and vice versa using a phonetic dictionary. For example check: http://www.speech.cs.cmu.edu/cgi-bin/cmudict\n\nThere are different symbolic representations for phonemes. For example the international phonetic alphabet is an alphabetic system of phonetic notation based primarily on the Latin script that tries to cover the sounds of all languages around the world. Interesting sidenote: all babies are born with the ability to recongize and also reproduce all phonemes but as they age in a particular linguistic environment their ability gets restricted/pruned to the phonemes of the particular languages they are exposed to. \n\nSo once we have the phonetic dictionary our task becomes to convert an audio recording of a human voice to a sequence of phonemes that can then be converted to written words using a phonetic dictionary. \n\nWithout going into details we form different phonemes by appropriately shaping our mouths and tongue and using our vocal folds to produce pitched and unpitched phonemes/sounds (vowels and consonants). It is possible to compute features such as **Mel-Frequency Cepstral Coefficients (MFCC)** using Digital Signal Processing techniques that characterizes these configurations over short intervals of time (typically 20-40 milliseconds). \n\nSo now, the task of automatic speech recognition becomes given a time sequence of feature vectors (computed from the audio recording) find the most likely sequence of phonemes that produced that sequence of feature vectors. \nPhonemes and especially vowels can have different durations so a particular word can be represented as a sequence of states corresponding to phonemes with repetitions. For example for the word **book** we might have the following sequence: $b,b,oo,oo,oo,oo,oo,oo,oo,oo,oo,oo,oo,k,k$ with informal state notation corresponding to the phonemes. Further complicating our task is the fact that depending on speakers and inflection there are many possible ways to render a particular phoneme. So we can also think of each phoneme as a distribution of feature vectors. \n\nSo let's look at some possible approaches to solve this problem in order of increasing complexity but \nalso improved accuracy: \n\n1. We can train a classifiers that given a feature vector predicts the corresponding phoneme. 
However this approach does not take into account that different phonemes have different probabilities (for example the phoneme correpsonding to the written symbol $z$ is less likely than the phoneme corresponding to the vowel $a$ as in the word apple), different phonemes have different typical durations (for example vowels tend to be longer than consonants), and certain transitions between phonemes for example $z$ followed by $b$ are very unlikely if not impossible whereas other ones are are much more common for example $r$ followed by $a$ as in the word apple). \n2. We can model the probabilities of diffeerent phonemes and their transitions as a first order Markove chain where the state is the phoneme and then the observation output of each state can be modelled as a continuous probability distribution over the **MFCCs** feature space. That way duration and transition information is taken into account when performing automatic speech recognition. \n\n\nAutomatic Speech Recognition Systems based on Hidden Markov Models (HMMs) dominated the field for about 20 years until they were superseded by deep learning models in the last decade or so. They are still widely used especially in situations with restricted computational resources where deep learning systems are not practical. ",
"_____no_output_____"
],
[
"## Hidden Markov Models \n\n\nProperties:\n\n* The observation at time $t$ is generated by some random process whose state $S_t$ is hidden from the observer.\n* The hidden states form a **Markov Chain** i.e given the value of $S_{t−1}$, the current state $S_t$ is independent of all states prior to $t − 1$. The outputs also satisfy a Markov property which is that given state $S_t$, the observation $Y_t$ is independent of all previous states and observations.\n* The hidden state variable $S_t$ is discrete\n",
"_____no_output_____"
],
[
"We can write the joint distribution of a sequence of states and observations by using the Markov assumptions to factorize:\n\n\n* $ P(S_{1:T},Y_{1:T}) = P(S_1)P(Y_1|S_1) \\prod_{t=2}^{T}P(St|S_{t−1})P(Yt|St)$\n\n\nwhere the notation $X_{1:T}$ indicates thesequence $X_1,X_2,...,X_T$.",
"_____no_output_____"
],
[
"We can view the Hiddean Markov Model graphically as a Bayesian network by unrolling over time - think of the HMM as a template for generating a Bayesian Network and the corresponding CPTs over time. In fact, it is possible \nto perform the temporal inference tasks using exact or approximate inference of the corresponding Bayesian network but for **HMMs** there are significantly more efficient algorithms. \n\n<img src=\"images/hmm2bayesnet.png\" width=\"50%\"/>\n",
"_____no_output_____"
],
[
"### Specifying an HMM\n\n\nSo all we need to do to specify an HMM are the following components:\n \n* A probability distribution over the intial state $P(S_1)$\n* The $K$ by $K$ state transition matrix $P(St|St−1)$, where $K$ is the number of states\n* The $K$ by $L$ emission matrix $P(Yt|St)$ if $Y_t$ is discrete and has $L$ values, or the parameters $θ_t$ of some form of continuous probability density function if $Yt$ is continuous.",
"_____no_output_____"
],
[
"### Learning the transition and sensor models\n\nIn addition to these tasks, we need methods for learning the transition and sensor models from observations. The basic idea is that inference provides an estimate of what transitions actually occurred and what states generated the observations. These estimates can then be used to update the models and the process can be repeated. This is an instance of the expectation-maximization (EM) algorithm. We will talk about learning probabilistic models in Chapter 20 Learning Probabilistic Models. ",
"_____no_output_____"
],
[
"### Sketch of filtering and prediction (Forward)\n\nWe perform recursive estimation. First the current state distribution is projected forward from $t$ to $t + 1$. Then it is updated using the new evidence $e_{t+1}$. We will not cover the details but it can be done by recursive application of Bayes rule and the Markov property of evidence and the sum/product rules.\nWe can think of the filtered estimate $P(X_t|e_{1:t})$ as a “message” that is propagated forward along the sequence, modified by each transition, and updated by each new observation.",
"_____no_output_____"
],
[
"### Sketch of smoothing (Backward)\n\nThere are two parts to computing the distribution over past states given evidence up to the present. The first is the evidence up to $k$, and then the evidence from $k + 1$ to $t$. The forward message can be computed as by filtering from $1$ to $k$. Using conditional independence and the sum and product rules we can form a backward message that runs backwards from $t$. It is possible to combine both steps in one pass to smooth the entire sequence. This is, not surprisingly, called the **Foward-Backward** algorithm.",
"_____no_output_____"
],
[
"### Finding the most likely sequence\n\nView each sequence of states as a path through a graph whose nodes are the possible states at each time step. The task is to find the most likely path through this graph, where the likelihood of any path is the product of the transition probabilities along the path and the probabilities of the given observations at each state. Because of the **Markov** property there is a recursive relationshtip between the most likely paths to each state $x_{t+1}$ and most likely paths to each state $x_t$. By running forward along the sequence, and computing m messages at each time step we will have the probaiblity for the most likely sequence reaching each of the final states. Then we simply select the most likely one. This is called the **Vitterbi** algorithm.",
"_____no_output_____"
],
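[
"# A minimal numerical sketch of the Viterbi recursion described above, written\n# with numpy for the umbrella model: m holds, for every state, the probability of\n# the most likely path ending in that state, and back-pointers recover the path.\n# The transition (0.7/0.3) and sensor (0.9/0.2) numbers are the umbrella example\n# values; the observation sequence below is just an assumed example.\nimport numpy as np\n\nT = np.array([[0.7, 0.3], [0.3, 0.7]])     # P(X_t | X_{t-1}), states = (Rain, NoRain)\nO = {True:  np.array([0.9, 0.2]),          # P(umbrella | state)\n     False: np.array([0.1, 0.8])}          # P(no umbrella | state)\nobs = [True, True, False]                  # umbrella, umbrella, no umbrella\n\nm = np.array([0.5, 0.5]) * O[obs[0]]       # initial step\nbackptr = []\nfor e in obs[1:]:\n    trans = m[:, None] * T                 # trans[i, j] = m[i] * P(j | i)\n    backptr.append(trans.argmax(axis=0))   # best predecessor of each state\n    m = O[e] * trans.max(axis=0)           # extend best paths, weigh by evidence\n\npath = [int(m.argmax())]                   # most likely final state\nfor bp in reversed(backptr):\n    path.insert(0, int(bp[path[0]]))       # follow the back-pointers\nprint('most likely state sequence (0=Rain, 1=NoRain):', path)",
"_____no_output_____"
],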
[
"### Markov Chains and Hidden Markov Models Example \n\nWe start with random variables and a simple independent, identically distributed model for weather. Then we look into how to form a Markov Chain to transition between states and finally we sample a Hidden Markov Model to show how the samples are generated based on the Markov Chain of the hidden states. The results are visualized as strips of colored rectangles. Experiments with the transition probabilities and the emission probabilities can lead to better understanding of how Hidden Markov Models work in terms of generating data.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline \nimport matplotlib.pyplot as plt\nfrom scipy import stats\nimport numpy as np\nfrom hmmlearn import hmm\n\n\n\nclass Random_Variable: \n \n def __init__(self, name, values, probability_distribution): \n self.name = name \n self.values = values \n self.probability_distribution = probability_distribution \n if all(type(item) is np.int64 for item in values): \n self.type = 'numeric'\n self.rv = stats.rv_discrete(name = name, values = (values, probability_distribution))\n elif all(type(item) is str for item in values): \n self.type = 'symbolic'\n self.rv = stats.rv_discrete(name = name, values = (np.arange(len(values)), probability_distribution))\n self.symbolic_values = values \n else: \n self.type = 'undefined'\n \n def sample(self,size): \n if (self.type =='numeric'): \n return self.rv.rvs(size=size)\n elif (self.type == 'symbolic'): \n numeric_samples = self.rv.rvs(size=size)\n mapped_samples = [self.values[x] for x in numeric_samples]\n return mapped_samples \n \n def probs(self): \n return self.probability_distribution\n \n def vals(self): \n print(self.type)\n return self.values \n \n ",
"_____no_output_____"
]
],
[
[
"### Generating random weather samples with a IID model with no time dependencies\n\n\nLet's first create some random samples of a symbolic random variable corresponding to the weather with two values Sunny (S) and cloudy (C) and generate random weather for 365 days. The assumption in this model is that the weather of each day is indepedent of the previous days and drawn from the same probability distribution.",
"_____no_output_____"
]
],
[
[
"values = ['S', 'C']\nprobabilities = [0.9, 0.1]\nweather = Random_Variable('weather', values, probabilities)\nsamples = weather.sample(365)\nprint(\",\".join(samples))",
"S,C,S,S,S,C,S,C,S,S,C,C,S,C,C,C,S,C,C,S,C,C,C,S,S,S,C,S,S,S,S,S,S,C,C,S,S,S,C,C,S,S,S,S,S,S,C,C,C,S,S,S,C,C,S,C,C,S,C,S,C,S,C,S,S,S,C,S,S,S,C,C,C,C,S,C,C,C,C,C,C,C,C,C,S,S,C,C,C,S,S,S,S,C,C,C,C,S,S,S,C,S,C,S,S,C,S,S,S,C,S,C,C,S,C,S,C,C,C,S,C,C,C,S,C,C,S,C,C,S,C,S,C,C,S,S,S,C,S,S,C,S,C,S,C,C,S,S,S,S,S,C,S,S,C,S,C,C,C,C,C,S,C,S,S,C,S,C,S,C,S,S,S,C,C,C,S,S,S,C,C,C,C,S,C,S,S,S,C,S,S,C,S,S,C,C,S,C,S,S,S,C,C,S,C,S,S,C,S,C,S,S,C,C,C,C,C,S,S,S,C,S,S,C,C,C,S,C,S,S,C,C,S,C,S,S,C,S,C,S,S,S,C,C,S,S,C,S,C,C,C,S,S,S,C,C,S,C,S,C,C,C,C,S,C,C,C,S,C,S,C,C,C,S,C,S,S,C,C,C,S,C,S,S,C,C,S,C,C,S,C,C,S,S,S,S,S,S,S,C,S,S,C,C,S,S,S,S,C,C,C,C,S,C,S,S,C,S,S,S,C,C,S,S,S,C,S,C,S,C,S,S,S,C,C,C,S,S,S,S,C,C,C,S,S,C,S,S,S,S,C,C,C,S,S,S,S,C,S,C,S,S,S,S,S\n"
]
],
[
[
"Now let lets visualize these samples using yellow for sunny and grey for cloudy ",
"_____no_output_____"
]
],
[
[
"state2color = {} \nstate2color['S'] = 'yellow'\nstate2color['C'] = 'grey'\n\ndef plot_weather_samples(samples, state2color, title): \n colors = [state2color[x] for x in samples]\n x = np.arange(0, len(colors))\n y = np.ones(len(colors))\n plt.figure(figsize=(10,1))\n plt.bar(x, y, color=colors, width=1)\n plt.title(title)\n \nplot_weather_samples(samples, state2color, 'iid')\n",
"_____no_output_____"
]
],
[
[
"### Markov Chain\n\nNow instead of independently sampling the weather random variable lets form a markov chain. The Markov chain will start at a particular state and then will either stay in the same state or transition to a different state based on a transition probability matrix. To accomplish that we basically create a random variable for each row of the transition matrix that basically corresponds to the probabilities of the transitions emanating fromt the state corresponding to that row. Then we can use the markov chain to generate sequences of samples and contrast these sequence with the iid weather model. By adjusting the transition probabilities you can in a probabilistic way control the different lengths of \"stretches\" of the same state.\n",
"_____no_output_____"
]
],
[
[
"def markov_chain(transmat, state, state_names, samples): \n (rows, cols) = transmat.shape \n rvs = [] \n values = list(np.arange(0,rows))\n \n # create random variables for each row of transition matrix \n for r in range(rows): \n rv = Random_Variable(\"row\" + str(r), values, transmat[r])\n rvs.append(rv)\n \n # start from initial state and then sample the appropriate \n # random variable based on the state following the transitions \n states = [] \n for n in range(samples): \n state = rvs[state].sample(1)[0] \n states.append(state_names[state])\n return states\n\n\n# transition matrices for the Markov Chain \ntransmat1 = np.array([[0.7, 0.3], \n [0.2, 0.8]])\n\ntransmat2 = np.array([[0.9, 0.1], \n [0.1, 0.9]])\n\ntransmat3 = np.array([[0.5, 0.5], \n [0.5, 0.5]])\n\nstate2color = {} \nstate2color['S'] = 'yellow'\nstate2color['C'] = 'grey'\n\n# plot the iid model too\nsamples = weather.sample(365)\nplot_weather_samples(samples, state2color, 'iid')\n\nsamples1 = markov_chain(transmat1,0,['S','C'], 365)\nplot_weather_samples(samples1, state2color, 'markov chain 1')\n\nsamples2 = markov_chain(transmat2,0,['S','C'],365)\nplot_weather_samples(samples2, state2color, 'marov_chain 2')\n\nsamples3 = markov_chain(transmat3,0,['S','C'], 365)\nplot_weather_samples(samples3, state2color, 'markov_chain 3')\n\n",
"_____no_output_____"
]
],
[
[
"### Generating samples using a Hidden Markov Model \n\nLets now look at how a Hidden Markov Model would work by having a Markov Chain to generate \na sequence of states and for each state having a different emission probability. When sunny we will output red or yellow with higher probabilities and when cloudy black or blue. First we will write the code directly and then we will use the hmmlearn package. \n",
"_____no_output_____"
]
],
[
[
"state2color = {} \nstate2color['S'] = 'yellow'\nstate2color['C'] = 'grey'\n\n# generate random samples for a year \nsamples = weather.sample(365)\nstates = markov_chain(transmat1,0,['S','C'], 365)\nplot_weather_samples(states, state2color, \"markov chain 1\")\n\n# create two random variables one of the sunny state and one for the cloudy \nsunny_colors = Random_Variable('sunny_colors', ['y', 'r', 'b', 'g'], \n [0.6, 0.3, 0.1, 0.0])\ncloudy_colors = Random_Variable('cloudy_colors', ['y', 'r', 'b', 'g'], \n [0.0, 0.1, 0.4, 0.5])\n\ndef emit_obs(state, sunny_colors, cloudy_colors): \n if (state == 'S'): \n obs = sunny_colors.sample(1)[0]\n else: \n obs = cloudy_colors.sample(1)[0]\n return obs \n\n# iterate over the sequence of states and emit color based on the emission probabilities \nobs = [emit_obs(s, sunny_colors, cloudy_colors) for s in states]\n\nobs2color = {} \nobs2color['y'] = 'yellow'\nobs2color['r'] = 'red'\nobs2color['b'] = 'blue'\nobs2color['g'] = 'grey'\nplot_weather_samples(obs, obs2color, \"Observed sky color\")\n\n# let's zoom in a month \nplot_weather_samples(states[0:30], state2color, 'states for a month')\nplot_weather_samples(obs[0:30], obs2color, 'observations for a month')\n",
"_____no_output_____"
]
],
[
[
"### Multinomial HMM \n\nLets do the same generation process using the multinomail HMM model supported by the *hmmlearn* python package. \n",
"_____no_output_____"
]
],
[
[
"transmat = np.array([[0.7, 0.3], \n [0.2, 0.8]])\n\nstart_prob = np.array([1.0, 0.0])\n\n# yellow and red have high probs for sunny \n# blue and grey have high probs for cloudy \nemission_probs = np.array([[0.6, 0.3, 0.1, 0.0], \n [0.0, 0.1, 0.4, 0.5]])\n\nmodel = hmm.MultinomialHMM(n_components=2)\nmodel.startprob_ = start_prob \nmodel.transmat_ = transmat \nmodel.emissionprob_ = emission_probs\n\n# sample the model - X is the observed values \n# and Z is the \"hidden\" states \nX, Z = model.sample(365)\n\n# we have to re-define state2color and obj2color as the hmm-learn \n# package just outputs numbers for the states \nstate2color = {} \nstate2color[0] = 'yellow'\nstate2color[1] = 'grey'\nplot_weather_samples(Z, state2color, 'states')\n\nsamples = [item for sublist in X for item in sublist]\nobj2color = {} \nobj2color[0] = 'yellow'\nobj2color[1] = 'red'\nobj2color[2] = 'blue'\nobj2color[3] = 'grey'\nplot_weather_samples(samples, obj2color, 'observations')",
"_____no_output_____"
]
],
[
[
"### Estimating the parameters of an HMM\n\nLet's sample the generative HMM and get a sequence of 1000 observations. Now we can learn in an unsupervised way the paraemters of a two component multinomial HMM just using these observations. Then we can compare the learned parameters with the original parameters of the model used to generate the observations. Notice that the order of the components is different between the original and estimated models. Notice that hmmlearn does NOT directly support supervised training where you have both the labels and observations. It is possible to initialize a HMM model with some of the parameters and learn the others. For example you can initialize the transition matrix and learn the emission probabilities. That way you could implement supervised learning for a multinomial HMM. In many practical applications the hidden labels are not available and that's the hard case that is actually implemented in hmmlearn.\n\nThe following two cells take a few minutes to compute on a typical laptop.",
"_____no_output_____"
]
],
[
[
"# generate the samples \nX, Z = model.sample(10000)\n# learn a new model \nestimated_model = hmm.MultinomialHMM(n_components=2, n_iter=10000).fit(X)\n",
"_____no_output_____"
]
],
[
[
"Let's compare the estimated model parameters with the original model. \n",
"_____no_output_____"
]
],
[
[
"print(\"Transition matrix\")\nprint(\"Estimated model:\")\nprint(estimated_model.transmat_)\nprint(\"Original model:\")\nprint(model.transmat_)\nprint(\"Emission probabilities\")\nprint(\"Estimated model\")\nprint(estimated_model.emissionprob_)\nprint(\"Original model\")\nprint(model.emissionprob_)",
"Transition matrix\nEstimated model:\n[[0.51221102 0.48778898]\n [0.50955673 0.49044327]]\nOriginal model:\n[[0.7 0.3]\n [0.2 0.8]]\nEmission probabilities\nEstimated model\n[[0.18583389 0.21418596 0.33339236 0.26658778]\n [0.29659401 0.12915002 0.22585028 0.34840569]]\nOriginal model\n[[0.6 0.3 0.1 0. ]\n [0. 0.1 0.4 0.5]]\n"
]
],
[
[
"### Predicting a sequence of states given a sequence of observations\n\nWe can also use the trained HMM model to predict a sequence of hidden states given a sequence of observations. This is the task of maximum likelihood sequence estimation. For example in Speech Recognition it would correspond to estimating a sequence of phonemes (hidden states) from a sequence of observations (acoustic vectors). \n\nThis cell also takes a few minutes to compute. Note that whether the predicted or flipped predicted states correspond to the original depends on which state is selected as state0 and state1. So sometimes when you run the notebook the predicted states will be the right color some times the flipped states will be the right ones. ",
"_____no_output_____"
]
],
[
[
"Z2 = estimated_model.predict(X)\nstate2color = {} \nstate2color[0] = 'yellow'\nstate2color[1] = 'grey'\nplot_weather_samples(Z, state2color, 'Original states')\nplot_weather_samples(Z2, state2color, 'Predicted states')\n\n# note the reversal of colors for the states as the order of components is not the same. \n# we can easily fix this by change the state2color \nstate2color = {} \nstate2color[1] = 'yellow'\nstate2color[0] = 'grey'\nplot_weather_samples(Z2, state2color, 'Flipped Predicted states')\n\n\n",
"_____no_output_____"
]
],
[
[
"The estimated model can be sampled just like the original model ",
"_____no_output_____"
]
],
[
[
"X, Z = estimated_model.sample(365)\n\nstate2color = {} \nstate2color[0] = 'yellow'\nstate2color[1] = 'grey'\nplot_weather_samples(Z, state2color, 'states generated by estimated model ')\n\nsamples = [item for sublist in X for item in sublist]\nobs2color = {} \nobs2color[0] = 'yellow'\nobs2color[1] = 'red'\nobs2color[2] = 'blue'\nobs2color[3] = 'grey'\nplot_weather_samples(samples, obs2color, 'observations generated by estimated model')\n\n",
"_____no_output_____"
]
],
[
[
"### An example of filtering \n\n<img src=\"images/rain_umbrella_hmm.png\" width=\"75%\"/>\n\n\n* Day 0: no observations $P(R_0) = <0.5, 0.5>$\n* Day 1: let's say umbrella appears, $U_{1} = true$. \n * The prediction step from $t=0$ to $t=1$ is \n $P(R_1) = \\sum_{r_0} P(R_1 | r_0) P(r_0) = \\langle 0.7, 0.3 \\rangle \\times 0.5 + \\langle 0.3, 0.7 \\rangle \\times 0.5 = \\langle 0.5, 0.5\\rangle $ \n * The update step simply multiplies the probability of the evidence for $t=1$ and normalizes: \n $P(R_1|u1) = \\alpha P(u_{1} | R_{1}) P(R_1) = \\alpha \\langle 0.9, 0.2 \\rangle \\times \\langle 0.5, 0.5 \\rangle = \\alpha \\langle 0.45, 0.1 \\rangle \\approx \\langle 0.818, 0.182 \\rangle $\n* Day 2: let's say umbrella appears, $U_{2} = true$. \n * Prediction step from $t=1$ to $t=2$ is $P(R_1 | u1) = \\alpha P(u_1 | R_1) P(R_1) = \\langle 0.7, 0.3 \\rangle \\times 0.818 + \\langle 0.3 0.7 \\rangle \\times 0.182 \\approx \\langle 0.627, 0.373 \\rangle $\n * Updating with evidence for t=2 gives: $P(R_2 | u_1, u_2) = \\alpha P(u_2/R_2)P(R2|u_1)= \\alpha \\langle 0.9, 0.2 \\rangle \\times \\langle 0.627, 0.373 \\rangle = \\alpha \\langle 0.565, 0.0075 \\rangle \\approx \\langle 0.883, 0.117 \\rangle $\n \nIntuitively, the probability of rain increases from day 1 to day 2 becaus ethe rain persists. \n",
"_____no_output_____"
]
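,
[
"# A short numpy check of the hand calculation above: predict with the transition\n# model, weigh by the evidence likelihood, and normalize. The 0.7/0.3 transition\n# and 0.9/0.2 sensor numbers are the ones shown in the figure.\nimport numpy as np\n\nT = np.array([[0.7, 0.3], [0.3, 0.7]])   # P(R_t | R_{t-1}) for (rain, no rain)\nsensor = np.array([0.9, 0.2])            # P(umbrella | R_t)\n\nbelief = np.array([0.5, 0.5])            # P(R_0)\nfor day in (1, 2):                       # umbrella observed on both days\n    predicted = belief @ T               # prediction step\n    belief = sensor * predicted          # update with the evidence u_t = true\n    belief = belief / belief.sum()       # normalization (the alpha factor)\n    print('Day', day, 'P(Rain | u_1:t) =', belief.round(3))",
"_____no_output_____"
]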
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c3d7ffa19402bd62da327418a1316ddfe04b7c | 385,880 | ipynb | Jupyter Notebook | LSN_exercise_10/Ex_10.ipynb | LeonardoAlchieri/LSN_exercises | 72166f8f7ed08969b3556423291f61ba58aa5627 | [
"MIT"
] | 1 | 2020-01-08T10:57:08.000Z | 2020-01-08T10:57:08.000Z | LSN_exercise_10/Ex_10.ipynb | LeonardoAlchieri/LSN_exercises | 72166f8f7ed08969b3556423291f61ba58aa5627 | [
"MIT"
] | null | null | null | LSN_exercise_10/Ex_10.ipynb | LeonardoAlchieri/LSN_exercises | 72166f8f7ed08969b3556423291f61ba58aa5627 | [
"MIT"
] | null | null | null | 761.104536 | 65,544 | 0.955364 | [
[
[
"# <span style=\"color:green\"> Numerical Simulation Laboratory (NSL) </span>\n## <span style=\"color:blue\"> Numerical exercises 10</span>",
"_____no_output_____"
],
[
"### Exercise 10.1\n\nBy adapting your Genetic Algorithm code, developed during the Numerical Exercise 9, write a C++ code to solve the TSP with a **Simulated Annealing** (SA) algorithm. Apply your code to the optimization of a path among \n\n- 30 cities randomly placed on a circumference\n\nShow your results via:\n\n- a picture of the length of the best path as a function of the iteration of your algorithm\n- a picture of the best path",
"_____no_output_____"
]
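,
[
"# A rough Python sketch (for illustration only - the actual solver is the C++\n# program called below) of the Metropolis acceptance rule that drives Simulated\n# Annealing for the TSP: a mutated path is always kept when it is shorter, and\n# kept with probability exp(-(L_new - L_old)/T) otherwise, while T is slowly\n# lowered. The city positions, move type and cooling schedule are assumptions.\nimport numpy as np\n\nrng = np.random.default_rng(0)\ncities = rng.random((30, 2))                       # placeholder city positions\n\ndef length(path):\n    ordered = cities[path]\n    return np.sum(np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1))\n\npath = rng.permutation(30)\nT = 1.0\nwhile T > 1e-3:\n    for _ in range(100):\n        i, j = sorted(rng.integers(0, 30, size=2))\n        candidate = path.copy()\n        candidate[i:j+1] = candidate[i:j+1][::-1]  # simple segment-reversal move\n        dL = length(candidate) - length(path)\n        if dL < 0 or rng.random() < np.exp(-dL / T):\n            path = candidate\n    T *= 0.9                                       # geometric cooling schedule\nprint('final path length:', length(path))",
"_____no_output_____"
]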
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom os import system\nfrom matplotlib.pyplot import figure\nfrom time import time",
"_____no_output_____"
],
[
"start = time()",
"_____no_output_____"
]
],
[
[
"# CIRCUMFERENCE",
"_____no_output_____"
]
],
[
[
"form = 0",
"_____no_output_____"
],
[
"system('sh clean.sh')\nsystem('./genetic.exe '+str(form))",
"_____no_output_____"
],
[
"results = np.loadtxt(\"results.txt\", skiprows=1)\ntemps = results[:,0]\npaths = results[:,1]",
"_____no_output_____"
],
[
"figure(figsize=(10,7), dpi=70)\nplt.plot(temps, paths)\nplt.axhline(2*np.pi, color='red', label='2π')\nplt.xlabel(\"Temperature\")\nplt.ylabel(\"Path lengths\")\nplt.legend(loc='best')\nplt.grid(True)\nplt.show()",
"_____no_output_____"
],
[
"print('-- Best path reached:', paths[-1])",
"-- Best path reached: 6.23829\n"
],
[
"positions = np.loadtxt(\"best_conf.txt\", skiprows=1)\nx = positions[:,0]\ny = positions[:,1]",
"_____no_output_____"
],
[
"figure(figsize=(10,10), dpi=70)\nplt.plot(x, y, marker=\"o\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()",
"_____no_output_____"
]
],
[
[
"# SQUARE",
"_____no_output_____"
]
],
[
[
"form = 1",
"_____no_output_____"
],
[
"system('sh clean.sh')\nsystem('./genetic.exe '+str(form))",
"_____no_output_____"
],
[
"results = np.loadtxt(\"results.txt\", skiprows=1)\ntemps = results[:,0]\npaths = results[:,1]",
"_____no_output_____"
],
[
"figure(figsize=(10,7), dpi=70)\nplt.plot(temps, paths)\nplt.xlabel(\"Temperature\")\nplt.ylabel(\"Path lengths\")\nplt.grid(True)\nplt.show()",
"_____no_output_____"
],
[
"print('-- Best path reached:', paths[-1])",
"-- Best path reached: 4.619\n"
],
[
"positions = np.loadtxt(\"best_conf.txt\", skiprows=1)\nx = positions[:,0]\ny = positions[:,1]",
"_____no_output_____"
],
[
"figure(figsize=(10,10), dpi=70)\nplt.plot(x, y, marker=\"o\")\nplt.xlabel('x')\nplt.ylabel('y')\nplt.show()",
"_____no_output_____"
],
[
"end = time()\nprint(\"-- Time for computation: \", int((end-start)*100.)/100., 'sec')",
"-- Time for computation: 6.15 sec\n"
]
],
[
[
"### Exercise 10.2\n\nParallelize with MPI libraries your Simulated Annealing code in order to solve the TSP by performing a *Random Search* with **parallel SA searches of the optimal path**:\neach node should perform an independent SA search and only in the end you will compare the results of each node.\nApply your code to the *usual* TSP problems above.",
"_____no_output_____"
],
[
"_I run only on the square, bacause the circumference is too easy and every node gets the same result._",
"_____no_output_____"
],
[
"_This part of the exercise has been run separately using MPI. I didn't dare to attempt to use ```os.system``` with it._",
"_____no_output_____"
]
],
[
[
"best_conf = np.zeros((4, 31, 2))\nresults = np.zeros((4, 1000, 2))\nfor rank in range(0,4):\n init = np.loadtxt(\"Parallel/best_conf\"+str(rank)+\".txt\", skiprows=1)\n init_2 = np.loadtxt(\"Parallel/results\"+str(rank)+\".txt\", skiprows=1)\n best_conf[rank, :, :] = init[:,:]\n results[rank,:,:] = init_2[:,:]",
"_____no_output_____"
],
[
"for rank in range(0,4):\n figure(figsize=(10,5), dpi=100)\n \n plt.subplot(1,2,1)\n plt.plot(best_conf[rank,:,0], best_conf[rank, :, 1], marker=\"o\")\n plt.title(\"Path rank: \"+str(rank))\n plt.xlabel(\"x\")\n plt.ylabel(\"y\")\n \n plt.subplot(1,2,2)\n plt.plot(results[rank,:,0], results[rank,:,1])\n plt.title(\"Path length to temperature, rank: \"+str(rank))\n plt.xlabel(\"Temperature\")\n plt.ylabel(\"Path length\")\n plt.show()\n print(\"BEST: \", results[rank,-1,1], \"------------ \\n \\n\")",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0c3f298cba25b2cf40277f0d257a2a42b364183 | 18,336 | ipynb | Jupyter Notebook | tools_examples/OpEn/examples_rust/OpEn_Rust_examples_general_diff.ipynb | inmo-jang/optimisation | 601af985d4a83f075dd8bd8760b16784515231b8 | [
"MIT"
] | null | null | null | tools_examples/OpEn/examples_rust/OpEn_Rust_examples_general_diff.ipynb | inmo-jang/optimisation | 601af985d4a83f075dd8bd8760b16784515231b8 | [
"MIT"
] | null | null | null | tools_examples/OpEn/examples_rust/OpEn_Rust_examples_general_diff.ipynb | inmo-jang/optimisation | 601af985d4a83f075dd8bd8760b16784515231b8 | [
"MIT"
] | 1 | 2020-04-27T23:28:18.000Z | 2020-04-27T23:28:18.000Z | 31.077966 | 561 | 0.45337 | [
[
[
"# OpEn Rust Examples: with General Gradient Function\n\nIn this example, we are going to use a function that can obtain the gradient of any given function. This sort of function was used in relaxed_ik rust version. Now, we are trying to use this approach for [the previous example that we implemented before](https://github.com/inmo-jang/optimisation_tutorial/blob/master/tools_examples/OpEn/examples_rust/OpEn_Rust_examples_obs_avoidance_simplified.ipynb).\n\n\n",
"_____no_output_____"
],
[
"## Problem Formulation (Remind from the previous example)\n\nMinimise $$f(\\mathbf{u}) = u_2$$ \n<div style=\"text-align: right\"> (P1) </div>\n\nsubject to \n\n$$ \\psi_{O}(\\mathbf{x}) = [1 - (\\mathbf{u} - \\mathbf{c})^{\\top}(\\mathbf{u} - \\mathbf{c})]_{+} = 0$$\n<div style=\"text-align: right\"> (P1C) </div> \n\n$$ u_2 = p_1 \\cdot (u_1 - p_2)^2 + p_3$$\n<div style=\"text-align: right\"> (P2C) </div> \n",
"_____no_output_____"
],
[
"## OpEn Implementation\n\nFirst, we need to import \"optimization_engine\". Also, we use \"nalgebra\" for linear algebra calculation. \n- In your local PC, it should be also declared in \"Cargo.toml\".\n- Instead, in this jupyter notebook, we need to have \"extern crate\" as follows. \n",
"_____no_output_____"
]
],
[
[
"extern crate optimization_engine;\nuse optimization_engine::{\n alm::*,\n constraints::*, panoc::*, *\n};\nextern crate nalgebra;\nuse nalgebra::base::{*};\n// use nalgebra::base::{Matrix4, Matrix4x2, Matrix4x1};\n// use std::cmp;",
"_____no_output_____"
]
],
[
[
"#### Problem Master Class\n\nYou should note that `AlmFactory` should have `f` and `df` being with `\nfn f(u: &[f64], cost: &mut f64) -> Result<(), SolverError>` and `fn df(u: &[f64], grad: &mut [f64]) -> Result<(), SolverError>`, respectively. It means that it could be nicer if we have a master class that can simply turn out `f` or `df` values. Such an architecture is used in [`relaxed_ik` rust version](https://github.com/uwgraphics/relaxed_ik/blob/dev/src/RelaxedIK_Rust/src/bin/lib/groove/objective_master.rs), which is a good example to be worth having a look. In this example as well, we are going to implement a problem master class as follows. \n",
"_____no_output_____"
]
],
[
[
"pub struct ProblemMaster{\n p_obs: Matrix2x1<f64>, // Obstacle Position\n p: Vec<f64> // parameters (slice)\n}\n\nimpl ProblemMaster{\n pub fn init(_p: Vec<f64>, _p_obs: Matrix2x1<f64>) -> Self {\n let p = _p;\n let p_obs = _p_obs;\n Self{p, p_obs} \n }\n \n // Cost function\n pub fn f_call(&self, u: &[f64]) -> f64{\n let cost = u[1];\n cost\n }\n \n pub fn f(&self, u: &[f64], cost: &mut f64){\n *cost = self.f_call(u); \n }\n \n // Gradient of the cost function\n pub fn df(&self, u: &[f64], grad: &mut [f64]){\n let mut f_0 = self.f_call(u);\n \n for i in 0..u.len() {\n let mut u_h = u.to_vec();\n u_h[i] += 0.000001;\n let f_h = self.f_call(u_h.as_slice());\n grad[i] = (-f_0 + f_h) / 0.000001;\n }\n \n } \n \n // F1 Constraint\n pub fn f1_call(&self, u: &[f64])-> Vec<f64> {\n let mut f1u = vec![0.0; u.len()]; \n f1u[0] = (1.0 - (u[0]-self.p_obs[(0,0)]).powi(2) - (u[1]-self.p_obs[(1,0)]).powi(2) ).max(0.0);\n f1u[1] = self.p[0]*(u[0] - self.p[1]).powi(2) + self.p[2] - u[1];\n return f1u;\n }\n \n pub fn f1(&self, u: &[f64], f1u: &mut [f64]){\n let mut f1u_vec = self.f1_call(u); \n for i in 0..f1u_vec.len(){\n f1u[i] = f1u_vec[i];\n }\n } \n \n // Jacobian of F1\n pub fn jf1_call(&self, u: &[f64])-> Matrix2<f64> {\n let mut jf1 = Matrix2::new(0.0, 0.0,\n 0.0, 0.0);\n \n let mut f1_0 = self.f1_call(u);\n\n for i in 0..f1_0.len(){\n for j in 0..u.len() {\n let mut u_h = u.to_vec();\n u_h[j] += 0.000001;\n let f_h = self.f1_call(u_h.as_slice());\n jf1[(i,j)] = (-f1_0[i] + f_h[i]) / 0.000001;\n } \n }\n\n return jf1; \n } \n \n // Jacobian Product (JF_1^{\\top}*d)\n pub fn f1_jacobian_product(&self, u: &[f64], d: &[f64], res: &mut [f64]){\n let test = self.f1_call(u);\n \n let mut jf1_matrix = self.jf1_call(u);\n if test[0] < 0.0{ // Outside the obstacle\n jf1_matrix[(0,0)] = 0.0;\n jf1_matrix[(0,1)] = 0.0; \n } \n \n let mut d_matrix = Matrix2x1::new(0.0, 0.0);\n for i in 0..d.len(){\n d_matrix[(i,0)] = d[i];\n }\n \n let mut res_matrix = jf1_matrix.transpose()*d_matrix;\n \n res[0] = res_matrix[(0,0)];\n res[1] = res_matrix[(1,0)]; \n }\n}",
"_____no_output_____"
]
],
[
[
"#### Main function\n",
"_____no_output_____"
]
],
[
[
"fn main(_p: &[f64], _centre: &[f64]) {\n /// ===========================================\n let mut p_obs = Matrix2x1::new(0.0, 0.0);\n for i in 0.._centre.len(){\n p_obs[(i,0)] = _centre[i];\n }\n \n let mut p: Vec<f64> = Vec::new(); \n for i in 0.._p.len(){\n p.push(_p[i]);\n }\n \n let mut pm = ProblemMaster::init(p, p_obs);\n \n /// ===========================================\n \n let tolerance = 1e-5;\n let nx = 2; // problem_size: dimension of the decision variables\n let n1 = 2; // range dimensions of mappings F1\n let n2 = 0; // range dimensions of mappings F2\n let lbfgs_mem = 5; // memory of the LBFGS buffer\n \n // PANOCCache: All the information needed at every step of the algorithm\n let panoc_cache = PANOCCache::new(nx, tolerance, lbfgs_mem);\n \n // AlmCache: A cache structure that contains all the data \n // that make up the state of the ALM/PM algorithm\n // (i.e., all those data that the algorithm updates)\n let mut alm_cache = AlmCache::new(panoc_cache, n1, n2);\n\n let set_c = Zero::new(); // Set C\n let bounds = Ball2::new(None, 100.0); // Set U\n let set_y = Ball2::new(None, 1e12); // Set Y\n\n // ============= \n // Re-define the functions linked to user parameters\n let f = |u: &[f64], cost: &mut f64| -> Result<(), SolverError> {\n pm.f(u, cost);\n Ok(())\n };\n \n let df = |u: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {\n pm.df(u, grad);\n Ok(())\n };\n \n let f1 = |u: &[f64], f1u: &mut [f64]| -> Result<(), SolverError> {\n pm.f1(u, f1u);\n Ok(())\n }; \n \n let f1_jacobian_product = |u: &[f64], d: &[f64], res: &mut [f64]| -> Result<(), SolverError> {\n pm.f1_jacobian_product(u,d,res);\n Ok(())\n }; \n // ==============\n \n // AlmFactory: Prepare function psi and its gradient \n // given the problem data such as f, del_f and \n // optionally F_1, JF_1, C, F_2\n let factory = AlmFactory::new(\n f, // Cost function\n df, // Cost Gradient\n Some(f1), // MappingF1\n Some(f1_jacobian_product), // Jacobian Mapping F1 Trans\n NO_MAPPING, // MappingF2\n NO_JACOBIAN_MAPPING, // Jacobian Mapping F2 Trans\n Some(set_c), // Constraint set\n n2,\n );\n\n // Define an optimisation problem \n // to be solved with AlmOptimizer\n let alm_problem = AlmProblem::new(\n bounds,\n Some(set_c),\n Some(set_y),\n |u: &[f64], xi: &[f64], cost: &mut f64| -> Result<(), SolverError> {\n factory.psi(u, xi, cost)\n },\n |u: &[f64], xi: &[f64], grad: &mut [f64]| -> Result<(), SolverError> {\n factory.d_psi(u, xi, grad)\n },\n Some(f1),\n NO_MAPPING,\n n1,\n n2,\n );\n\n let mut alm_optimizer = AlmOptimizer::new(&mut alm_cache, alm_problem)\n .with_delta_tolerance(1e-5)\n .with_max_outer_iterations(200)\n .with_epsilon_tolerance(1e-6)\n .with_initial_inner_tolerance(1e-2)\n .with_inner_tolerance_update_factor(0.5)\n .with_initial_penalty(100.0)\n .with_penalty_update_factor(1.05)\n .with_sufficient_decrease_coefficient(0.2)\n .with_initial_lagrange_multipliers(&vec![5.0; n1]);\n\n let mut u = vec![0.0; nx]; // Initial guess\n let solver_result = alm_optimizer.solve(&mut u);\n let r = solver_result.unwrap();\n println!(\"\\n\\nSolver result : {:#.7?}\\n\", r);\n println!(\"Solution u = {:#.6?}\", u);\n}",
"_____no_output_____"
]
],
[
[
"#### Result\n\n##### Case (1) : $\\mathbf{p} = [0.5, 0.0, 0.0]$; and $\\mathbf{c} = [0.0, 0.0]$\n",
"_____no_output_____"
]
],
[
[
"main(&[0.5, 0.0, 0.0], &[0.0, 0.0]);",
"\n\nSolver result : AlmOptimizerStatus {\n exit_status: Converged,\n num_outer_iterations: 88,\n num_inner_iterations: 1178,\n last_problem_norm_fpr: 0.0000006,\n lagrange_multipliers: Some(\n [\n 5.0298889,\n 3.9940012,\n ],\n ),\n solve_time: 1.2774560ms,\n penalty: 5737.3563215,\n delta_y_norm: 0.0425532,\n f2_norm: 0.0000000,\n}\n\nSolution u = [\n 0.910177,\n 0.414219,\n]\n"
]
],
[
[
"##### Case (2) : $\\mathbf{p} = [0.5, 0.0, 0.0]$; and $\\mathbf{c} = [-0.25, 0.0]$\n\n",
"_____no_output_____"
]
],
[
[
"main(&[0.5, 0.0, 0.0], &[-0.25, 0.0]);",
"\n\nSolver result : AlmOptimizerStatus {\n exit_status: Converged,\n num_outer_iterations: 73,\n num_inner_iterations: 249,\n last_problem_norm_fpr: 0.0000009,\n lagrange_multipliers: Some(\n [\n 5.0161011,\n 5.5350137,\n ],\n ),\n solve_time: 426.3990000µs,\n penalty: 2897.7548129,\n delta_y_norm: 0.0137157,\n f2_norm: 0.0000000,\n}\n\nSolution u = [\n 0.716497,\n 0.256679,\n]\n"
]
],
[
[
"##### Case (3) : $\\mathbf{p} = [0.5, -0.5, 0.0]$; and $\\mathbf{c} = [0.0, 0.0]$",
"_____no_output_____"
]
],
[
[
"main(&[0.5, -0.5, 0.0], &[0.0, 0.0]);",
"\n\nSolver result : AlmOptimizerStatus {\n exit_status: Converged,\n num_outer_iterations: 60,\n num_inner_iterations: 876,\n last_problem_norm_fpr: 0.0000001,\n lagrange_multipliers: Some(\n [\n 5.0008478,\n 0.8368964,\n ],\n ),\n solve_time: 951.4560000µs,\n penalty: 1463.5630916,\n delta_y_norm: 0.0116015,\n f2_norm: 0.0000000,\n}\n\nSolution u = [\n -0.992612,\n 0.121341,\n]\n"
]
],
[
[
"##### Case (4) : $\\mathbf{p} = [0.1, 0.0, 0.0]$; and $\\mathbf{c} = [0.5, 0.0]$",
"_____no_output_____"
]
],
[
[
"main(&[0.1, 0.0, 0.0], &[0.5, 0.0]);",
"\n\nSolver result : AlmOptimizerStatus {\n exit_status: Converged,\n num_outer_iterations: 97,\n num_inner_iterations: 334,\n last_problem_norm_fpr: 0.0000001,\n lagrange_multipliers: Some(\n [\n 5.0000000,\n 0.9943379,\n ],\n ),\n solve_time: 663.6030000µs,\n penalty: 9345.5488840,\n delta_y_norm: 0.0297599,\n f2_norm: 0.0000000,\n}\n\nSolution u = [\n -0.499689,\n 0.024972,\n]\n"
]
],
[
[
"##### Case (5) : $\\mathbf{p} = [-0.5, -0.5864, 0.0]$; and $\\mathbf{c} = [-\n3.0, 2.0]$",
"_____no_output_____"
]
],
[
[
"main(&[0.5, -0.5864, 0.0], &[-3.0, 2.0]);",
"\n\nSolver result : AlmOptimizerStatus {\n exit_status: Converged,\n num_outer_iterations: 15,\n num_inner_iterations: 18,\n last_problem_norm_fpr: 0.0000003,\n lagrange_multipliers: Some(\n [\n 5.0000000,\n 0.9999921,\n ],\n ),\n solve_time: 44.3680000µs,\n penalty: 179.5856326,\n delta_y_norm: 0.0000141,\n f2_norm: 0.0000000,\n}\n\nSolution u = [\n -0.586364,\n -0.000000,\n]\n"
]
],
[
[
"In this case, the circle constraint does not cover the minimum point of the cost function. Thus, the optimal value should be zero, where $u_1 = p_2$.\n\n\n",
"_____no_output_____"
],
[
"## Summary\n\n- Compared with the previous example, the solver can converge in all the test cases (In the previous example, we had to tune initial guess and max_iteration number). \n\n- Also, solution times are less than those in the preivous example. \n\n- We do not need to manually implement the gradients of the cost function and the constraints. \n\n## Future work\n\n- Let's make the `ProblemMaster` have resizable parameters (e.g. not binding to `Matrix2x1`). \n\n- Let's implement a path planner based on Model Predictive Control **without consideration of the system dynamics (only concerning the path)**\n\n",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c4017cd3e78e9c16f7bc3386cf742ebed32e91 | 135,056 | ipynb | Jupyter Notebook | Chapters/2.07 Eigendecomposition/2.7 Eigendecomposition.ipynb | anamulmb/Math-to-Hone-your-DS-Skill | 6d4f0bd887d7b2c51fb0e95ad4f0659513631697 | [
"MIT"
] | 30 | 2019-05-07T11:34:25.000Z | 2022-01-16T16:26:45.000Z | 01_Fundamentos/07 Eigendecomposition/2.7 Eigendecomposition.ipynb | Beholdram/UTEM-EFEB8001-Machine-Learning | ec1c65458181c76a17860207baa45738c49f9c02 | [
"MIT"
] | null | null | null | 01_Fundamentos/07 Eigendecomposition/2.7 Eigendecomposition.ipynb | Beholdram/UTEM-EFEB8001-Machine-Learning | ec1c65458181c76a17860207baa45738c49f9c02 | [
"MIT"
] | 26 | 2020-02-19T16:07:50.000Z | 2022-03-21T11:59:17.000Z | 66.628515 | 13,892 | 0.766208 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns",
"_____no_output_____"
],
[
"# Plot style\nsns.set()\n%pylab inline\npylab.rcParams['figure.figsize'] = (4, 4)\n# Avoid inaccurate floating values (for inverse matrices in dot product for instance)\n# See https://stackoverflow.com/questions/24537791/numpy-matrix-inversion-rounding-errors\nnp.set_printoptions(suppress=True)",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"def plotVectors(vecs, cols, alpha=1):\n \"\"\"\n Plot set of vectors.\n\n Parameters\n ----------\n vecs : array-like\n Coordinates of the vectors to plot. Each vectors is in an array. For\n instance: [[1, 3], [2, 2]] can be used to plot 2 vectors.\n cols : array-like\n Colors of the vectors. For instance: ['red', 'blue'] will display the\n first vector in red and the second in blue.\n alpha : float\n Opacity of vectors\n\n Returns:\n\n fig : instance of matplotlib.figure.Figure\n The figure of the vectors\n \"\"\"\n plt.axvline(x=0, color='#A9A9A9', zorder=0)\n plt.axhline(y=0, color='#A9A9A9', zorder=0)\n\n for i in range(len(vecs)):\n if (isinstance(alpha, list)):\n alpha_i = alpha[i]\n else:\n alpha_i = alpha\n x = np.concatenate([[0,0],vecs[i]])\n plt.quiver([x[0]],\n [x[1]],\n [x[2]],\n [x[3]],\n angles='xy', scale_units='xy', scale=1, color=cols[i],\n alpha=alpha_i)",
"_____no_output_____"
]
],
[
[
"$$\n\\newcommand\\bs[1]{\\boldsymbol{#1}}\n\\newcommand\\norm[1]{\\left\\lVert#1\\right\\rVert}\n$$",
"_____no_output_____"
],
[
"# Introduction\n\nWe will see some major concepts of linear algebra in this chapter. It is also quite heavy so hang on! We will start with getting some ideas on eigenvectors and eigenvalues. We will develop on the idea that a matrix can be seen as a linear transformation and that applying a matrix on its eigenvectors gives new vectors with the same direction. Then we will see how to express quadratic equations into the matrix form. We will see that the eigendecomposition of the matrix corresponding to a quadratic equation can be used to find the minimum and maximum of this function. As a bonus, we will also see how to visualize linear transformations in Python!",
"_____no_output_____"
],
[
"# 2.7 Eigendecomposition",
"_____no_output_____"
],
[
"The eigendecomposition is one form of matrix decomposition. Decomposing a matrix means that we want to find a product of matrices that is equal to the initial matrix. In the case of the eigendecomposition, we decompose the initial matrix into the product of its eigenvectors and eigenvalues. Before all, let's see what are eigenvectors and eigenvalues.\n\n# Matrices as linear transformations\n\nAs we have seen in [2.3](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.3-Identity-and-Inverse-Matrices/) with the example of the identity matrix, you can think of matrices as linear transformations. Some matrices will rotate your space, others will rescale it etc. So when we apply a matrix to a vector, we end up with a transformed version of the vector. When we say that we 'apply' the matrix to the vector it means that we calculate the dot product of the matrix with the vector. We will start with a basic example of this kind of transformation.\n\n### Example 1.",
"_____no_output_____"
]
],
[
[
"A = np.array([[-1, 3], [2, -2]])\nA",
"_____no_output_____"
],
[
"v = np.array([[2], [1]])\nv",
"_____no_output_____"
]
],
[
[
"Let's plot this vector:",
"_____no_output_____"
]
],
[
[
"plotVectors([v.flatten()], cols=['#1190FF'])\nplt.ylim(-1, 4)\nplt.xlim(-1, 4)",
"_____no_output_____"
]
],
[
[
"Now, we will apply the matrix $\\bs{A}$ to this vector and plot the old vector (light blue) and the new one (orange):",
"_____no_output_____"
]
],
[
[
"Av = A.dot(v)\nprint(Av)\nplotVectors([v.flatten(), Av.flatten()], cols=['#1190FF', '#FF9A13'])\nplt.ylim(-1, 4)\nplt.xlim(-1, 4)",
"[[1]\n [2]]\n"
]
],
[
[
"We can see that applying the matrix $\\bs{A}$ has the effect of modifying the vector.\n\nNow that you can think of matrices as linear transformation recipes, let's see the case of a very special type of vector: the eigenvector.",
"_____no_output_____"
],
[
"# Eigenvectors and eigenvalues\n\nWe have seen an example of a vector transformed by a matrix. Now imagine that the transformation of the initial vector gives us a new vector that has the exact same direction. The scale can be different but the direction is the same. Applying the matrix didn't change the direction of the vector. This special vector is called an eigenvector of the matrix. We will see that finding the eigenvectors of a matrix can be very useful.\n\n<span class='pquote'>\n Imagine that the transformation of the initial vector by the matrix gives a new vector with the exact same direction. This vector is called an eigenvector of $\\bs{A}$.\n</span>\n\nThis means that $\\bs{v}$ is a eigenvector of $\\bs{A}$ if $\\bs{v}$ and $\\bs{Av}$ are in the same direction or to rephrase it if the vectors $\\bs{Av}$ and $\\bs{v}$ are parallel. The output vector is just a scaled version of the input vector. This scalling factor is $\\lambda$ which is called the **eigenvalue** of $\\bs{A}$.\n\n$$\n\\bs{Av} = \\lambda\\bs{v}\n$$",
"_____no_output_____"
],
[
"### Example 2.\n\nLet's $\\bs{A}$ be the following matrix:\n\n$$\n\\bs{A}=\n\\begin{bmatrix}\n 5 & 1\\\\\\\\\n 3 & 3\n\\end{bmatrix}\n$$\n\nWe know that one eigenvector of A is:\n\n$$\n\\bs{v}=\n\\begin{bmatrix}\n 1\\\\\\\\\n 1\n\\end{bmatrix}\n$$\n\nWe can check that $\\bs{Av} = \\lambda\\bs{v}$:\n\n$$\n\\begin{bmatrix}\n 5 & 1\\\\\\\\\n 3 & 3\n\\end{bmatrix}\n\\begin{bmatrix}\n 1\\\\\\\\\n 1\n\\end{bmatrix}=\\begin{bmatrix}\n 6\\\\\\\\\n 6\n\\end{bmatrix}\n$$\n\nWe can see that:\n\n$$\n6\\times \\begin{bmatrix}\n 1\\\\\\\\\n 1\n\\end{bmatrix} = \\begin{bmatrix}\n 6\\\\\\\\\n 6\n\\end{bmatrix}\n$$\n\nwhich means that $\\bs{v}$ is well an eigenvector of $\\bs{A}$. Also, the corresponding eigenvalue is $\\lambda=6$.",
"_____no_output_____"
],
[
"We can represent $\\bs{v}$ and $\\bs{Av}$ to check if their directions are the same:",
"_____no_output_____"
]
],
[
[
"A = np.array([[5, 1], [3, 3]])\nA",
"_____no_output_____"
],
[
"v = np.array([[1], [1]])\nv",
"_____no_output_____"
],
[
"Av = A.dot(v)\n\norange = '#FF9A13'\nblue = '#1190FF'\n\nplotVectors([Av.flatten(), v.flatten()], cols=[blue, orange])\nplt.ylim(-1, 7)\nplt.xlim(-1, 7)",
"_____no_output_____"
]
],
[
[
"We can see that their directions are the same!",
"_____no_output_____"
],
[
"Another eigenvector of $\\bs{A}$ is\n\n$$\n\\bs{v}=\n\\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix}\n$$\n\nbecause\n\n$$\n\\begin{bmatrix}\n 5 & 1\\\\\\\\\n 3 & 3\n\\end{bmatrix}\\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix} = \\begin{bmatrix}\n 2\\\\\\\\\n -6\n\\end{bmatrix}\n$$\n\nand\n\n$$\n2 \\times \\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix} =\n\\begin{bmatrix}\n 2\\\\\\\\\n -6\n\\end{bmatrix}\n$$\n\nSo the corresponding eigenvalue is $\\lambda=2$.",
"_____no_output_____"
]
],
[
[
"v = np.array([[1], [-3]])\nv",
"_____no_output_____"
],
[
"Av = A.dot(v)\n\nplotVectors([Av.flatten(), v.flatten()], cols=[blue, orange])\nplt.ylim(-7, 1)\nplt.xlim(-1, 3)",
"_____no_output_____"
]
],
[
[
"This example shows that the eigenvectors $\\bs{v}$ are vectors that change only in scale when we apply the matrix $\\bs{A}$ to them. Here the scales were 6 for the first eigenvector and 2 to the second but $\\lambda$ can take any real or even complex value.",
"_____no_output_____"
],
[
"## Find eigenvalues and eigenvectors in Python\n\nNumpy provides a function returning eigenvectors and eigenvalues (the first array corresponds to the eigenvalues and the second to the eigenvectors concatenated in columns):\n\n```python\n(array([ 6., 2.]), array([[ 0.70710678, -0.31622777],\n [ 0.70710678, 0.9486833 ]]))\n```\n\nHere a demonstration with the preceding example.",
"_____no_output_____"
]
],
[
[
"A = np.array([[5, 1], [3, 3]])\nA",
"_____no_output_____"
],
[
"np.linalg.eig(A)",
"_____no_output_____"
]
],
[
[
"We can see that the eigenvalues are the same than the ones we used before: 6 and 2 (first array).\n\nThe eigenvectors correspond to the columns of the second array. This means that the eigenvector corresponding to $\\lambda=6$ is:\n\n$$\n\\begin{bmatrix}\n 0.70710678\\\\\\\\\n 0.70710678\n\\end{bmatrix}\n$$\n\nThe eigenvector corresponding to $\\lambda=2$ is:\n\n$$\n\\begin{bmatrix}\n -0.31622777\\\\\\\\\n 0.9486833\n\\end{bmatrix}\n$$\n\nThe eigenvectors look different because they have not necessarly the same scaling than the ones we gave in the example. We can easily see that the first corresponds to a scaled version of our $\\begin{bmatrix}\n 1\\\\\\\\\n 1\n\\end{bmatrix}$. But the same property stands. We have still $\\bs{Av} = \\lambda\\bs{v}$:\n\n$$\n\\begin{bmatrix}\n 5 & 1\\\\\\\\\n 3 & 3\n\\end{bmatrix}\n\\begin{bmatrix}\n 0.70710678\\\\\\\\\n 0.70710678\n\\end{bmatrix}=\n\\begin{bmatrix}\n 4.24264069\\\\\\\\\n 4.24264069\n\\end{bmatrix}\n$$\n\nWith $0.70710678 \\times 6 = 4.24264069$. So there are an infinite number of eigenvectors corresponding to the eigenvalue $6$. They are equivalent because we are interested by their directions.\n\nFor the second eigenvector we can check that it corresponds to a scaled version of $\\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix}$. We can draw these vectors and see if they are parallel.",
"_____no_output_____"
]
],
[
[
"v = np.array([[1], [-3]])\nAv = A.dot(v)\nv_np = [-0.31622777, 0.9486833]\n\nplotVectors([Av.flatten(), v.flatten(), v_np], cols=[blue, orange, 'blue'])\nplt.ylim(-7, 1)\nplt.xlim(-1, 3)",
"_____no_output_____"
]
],
[
[
"We can see that the vector found with Numpy (in dark blue) is a scaled version of our preceding $\\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix}$.",
"_____no_output_____"
],
[
"## Rescaled vectors\n\nAs we saw it with numpy, if $\\bs{v}$ is an eigenvector of $\\bs{A}$, then any rescaled vector $s\\bs{v}$ is also an eigenvector of $\\bs{A}$. The eigenvalue of the rescaled vector is the same.\n\nLet's try to rescale\n\n$$\n\\bs{v}=\n\\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix}\n$$\n\nfrom our preceding example. \n\nFor instance,\n\n$$\n\\bs{3v}=\n\\begin{bmatrix}\n 3\\\\\\\\\n -9\n\\end{bmatrix}\n$$\n\n$$\n\\begin{bmatrix}\n 5 & 1\\\\\\\\\n 3 & 3\n\\end{bmatrix}\n\\begin{bmatrix}\n 3\\\\\\\\\n -9\n\\end{bmatrix} =\n\\begin{bmatrix}\n 6\\\\\\\\\n 18\n\\end{bmatrix} = 2 \\times\n\\begin{bmatrix}\n 3\\\\\\\\\n -9\n\\end{bmatrix}\n$$\n\nWe have well $\\bs{A}\\times 3\\bs{v} = \\lambda\\bs{v}$ and the eigenvalue is still $\\lambda=2$.",
"_____no_output_____"
],
[
"## Concatenating eigenvalues and eigenvectors\n\nNow that we have an idea of what eigenvectors and eigenvalues are we can see how it can be used to decompose a matrix. All eigenvectors of a matrix $\\bs{A}$ can be concatenated in a matrix with each column corresponding to each eigenvector (like in the second array return by `np.linalg.eig(A)`):\n\n$$\n\\bs{V}=\n\\begin{bmatrix}\n 1 & 1\\\\\\\\\n 1 & -3\n\\end{bmatrix}\n$$\n\nThe first column $\n\\begin{bmatrix}\n 1\\\\\\\\\n 1\n\\end{bmatrix}\n$ corresponds to $\\lambda=6$ and the second $\n\\begin{bmatrix}\n 1\\\\\\\\\n -3\n\\end{bmatrix}\n$ to $\\lambda=2$.\n\nThe vector $\\bs{\\lambda}$ can be created from all eigenvalues:\n\n$$\n\\bs{\\lambda}=\n\\begin{bmatrix}\n 6\\\\\\\\\n 2\n\\end{bmatrix}\n$$\n\nThen the eigendecomposition is given by\n\n$$\n\\bs{A}=\\bs{V}\\cdot diag(\\bs{\\lambda}) \\cdot \\bs{V}^{-1}\n$$\n\n<span class='pquote'>\n We can decompose the matrix $\\bs{A}$ with eigenvectors and eigenvalues. It is done with: $\\bs{A}=\\bs{V}\\cdot diag(\\bs{\\lambda}) \\cdot \\bs{V}^{-1}$\n</span>\n\n$diag(\\bs{v})$ is a diagonal matrix (see [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)) containing all the eigenvalues. Continuing with our example we have\n\n$$\n\\bs{V}=\\begin{bmatrix}\n 1 & 1\\\\\\\\\n 1 & -3\n\\end{bmatrix}\n$$\n\nThe diagonal matrix is all zeros except the diagonal that is our vector $\\bs{\\lambda}$.\n\n$$\ndiag(\\bs{v})=\n\\begin{bmatrix}\n 6 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n$$\n\nThe inverse matrix of $\\bs{V}$ can be calculated with numpy:",
"_____no_output_____"
]
],
[
[
"V = np.array([[1, 1], [1, -3]])\nV",
"_____no_output_____"
],
[
"V_inv = np.linalg.inv(V)\nV_inv",
"_____no_output_____"
]
],
[
[
"So let's plug\n\n$$\n\\bs{V}^{-1}=\\begin{bmatrix}\n 0.75 & 0.25\\\\\\\\\n 0.25 & -0.25\n\\end{bmatrix}\n$$\n\ninto our equation:\n\n$$\n\\begin{align*}\n&\\bs{V}\\cdot diag(\\bs{\\lambda}) \\cdot \\bs{V}^{-1}\\\\\\\\\n&=\n\\begin{bmatrix}\n 1 & 1\\\\\\\\\n 1 & -3\n\\end{bmatrix}\n\\begin{bmatrix}\n 6 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n\\begin{bmatrix}\n 0.75 & 0.25\\\\\\\\\n 0.25 & -0.25\n\\end{bmatrix}\n\\end{align*}\n$$\n\nIf we do the dot product of the first two matrices we have:\n\n$$\n\\begin{bmatrix}\n 1 & 1\\\\\\\\\n 1 & -3\n\\end{bmatrix}\n\\begin{bmatrix}\n 6 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix} =\n\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 6 & -6\n\\end{bmatrix}\n$$\n\nSo with replacing into the equation:\n\n$$\n\\begin{align*}\n&\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 6 & -6\n\\end{bmatrix}\n\\begin{bmatrix}\n 0.75 & 0.25\\\\\\\\\n 0.25 & -0.25\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n 6\\times0.75 + (2\\times0.25) & 6\\times0.25 + (2\\times-0.25)\\\\\\\\\n 6\\times0.75 + (-6\\times0.25) & 6\\times0.25 + (-6\\times-0.25)\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n 5 & 1\\\\\\\\\n 3 & 3\n\\end{bmatrix}=\n\\bs{A}\n\\end{align*}\n$$\n\nLet's check our result with Python:",
"_____no_output_____"
]
],
[
[
"lambdas = np.diag([6,2])\nlambdas",
"_____no_output_____"
],
[
"V.dot(lambdas).dot(V_inv)",
"_____no_output_____"
]
],
[
[
"That confirms our previous calculation. ",
"_____no_output_____"
],
[
"## Real symmetric matrix\n\nIn the case of real symmetric matrices (more details about symmetric matrices in [2.6](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.6-Special-Kinds-of-Matrices-and-Vectors/)), the eigendecomposition can be expressed as\n\n$$\n\\bs{A} = \\bs{Q}\\Lambda \\bs{Q}^\\text{T}\n$$\n\nwhere $\\bs{Q}$ is the matrix with eigenvectors as columns and $\\Lambda$ is $diag(\\lambda)$.\n\n### Example 3.\n\n$$\n\\bs{A}=\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 2 & 3\n\\end{bmatrix}\n$$\n\nThis matrix is symmetric because $\\bs{A}=\\bs{A}^\\text{T}$. Its eigenvectors are:\n\n$$\n\\bs{Q}=\n\\begin{bmatrix}\n 0.89442719 & -0.4472136\\\\\\\\\n 0.4472136 & 0.89442719\n\\end{bmatrix}\n$$\n\nand its eigenvalues put in a diagonal matrix gives:\n\n$$\n\\bs{\\Lambda}=\n\\begin{bmatrix}\n 7 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n$$\n\nSo let's begin to calculate $\\bs{Q\\Lambda}$:\n\n$$\n\\begin{align*}\n\\bs{Q\\Lambda}&=\n\\begin{bmatrix}\n 0.89442719 & -0.4472136\\\\\\\\\n 0.4472136 & 0.89442719\n\\end{bmatrix}\n\\begin{bmatrix}\n 7 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n 0.89442719 \\times 7 & -0.4472136\\times 2\\\\\\\\\n 0.4472136 \\times 7 & 0.89442719\\times 2\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n 6.26099033 & -0.8944272\\\\\\\\\n 3.1304952 & 1.78885438\n\\end{bmatrix}\n\\end{align*}\n$$\n\nwith:\n\n$$\n\\bs{Q}^\\text{T}=\n\\begin{bmatrix}\n 0.89442719 & 0.4472136\\\\\\\\\n -0.4472136 & 0.89442719\n\\end{bmatrix}\n$$\n\nSo we have:\n\n$$\n\\begin{align*}\n\\bs{Q\\Lambda} \\bs{Q}^\\text{T}&=\n\\begin{bmatrix}\n 6.26099033 & -0.8944272\\\\\\\\\n 3.1304952 & 1.78885438\n\\end{bmatrix}\n\\begin{bmatrix}\n 0.89442719 & 0.4472136\\\\\\\\\n -0.4472136 & 0.89442719\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 2 & 3\n\\end{bmatrix}\n\\end{align*}\n$$\n\nIt works! For that reason, it can useful to use symmetric matrices! Let's do the same things easily with `linalg` from numpy:",
"_____no_output_____"
]
],
[
[
"A = np.array([[6, 2], [2, 3]])\nA",
"_____no_output_____"
],
[
"eigVals, eigVecs = np.linalg.eig(A)\neigVecs",
"_____no_output_____"
],
[
"eigVals = np.diag(eigVals)\neigVals",
"_____no_output_____"
],
[
"eigVecs.dot(eigVals).dot(eigVecs.T)",
"_____no_output_____"
]
],
[
[
"We can see that the result corresponds to our initial matrix.",
"_____no_output_____"
],
[
"# Quadratic form to matrix form\n\nEigendecomposition can be used to optimize quadratic functions. We will see that when $\\bs{x}$ takes the values of an eigenvector, $f(\\bs{x})$ takes the value of its corresponding eigenvalue.\n\n<span class='pquote'>\n When $\\bs{x}$ takes the values of an eigenvector, $f(\\bs{x})$ takes the value of its corresponding eigenvalue.\n</span>\n\nWe will see in the following points how we can show that with different methods.\n\nLet's have the following quadratic equation:\n\n$$\nf(\\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2\n$$\n\nThese quadratic forms can be generated by matrices:\n\n$$\nf(\\bs{x})= \\begin{bmatrix}\n x_1 & x_2\n\\end{bmatrix}\\begin{bmatrix}\n a & b\\\\\\\\\n c & d\n\\end{bmatrix}\\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix} = \\bs{x^\\text{T}Ax}\n$$\n\nwith:\n\n$$\n\\bs{x} = \\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix}\n$$\n\nand\n\n$$\n\\bs{A}=\\begin{bmatrix}\n a & b\\\\\\\\\n c & d\n\\end{bmatrix}\n$$\n\nWe call them matrix forms. This form is useful to do various things on the quadratic equation like constrained optimization (see bellow).\n\n<span class='pquote'>\n Quadratic equations can be expressed under the matrix form\n</span>\n\nIf you look at the relation between these forms you can see that $a$ gives you the number of $x_1^2$, $(b + c)$ the number of $x_1x_2$ and $d$ the number of $x_2^2$. This means that the same quadratic form can be obtained from infinite number of matrices $\\bs{A}$ by changing $b$ and $c$ while preserving their sum.\n\n### Example 4.\n\n$$\n\\bs{x} = \\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix}\n$$\n\nand\n\n$$\n\\bs{A}=\\begin{bmatrix}\n 2 & 4\\\\\\\\\n 2 & 5\n\\end{bmatrix}\n$$\n\ngives the following quadratic form:\n\n$$\n2x_1^2 + (4+2)x_1x_2 + 5x_2^2\\\\\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2\n$$\n\nbut if:\n\n$$\n\\bs{A}=\\begin{bmatrix}\n 2 & -3\\\\\\\\\n 9 & 5\n\\end{bmatrix}\n$$\n\nwe still have the quadratic same form:\n\n$$\n2x_1^2 + (-3+9)x_1x_2 + 5x_2^2\\\\\\\\=2x_1^2 + 6x_1x_2 + 5x_2^2\n$$",
"_____no_output_____"
],
[
"### Example 5\n\nFor this example, we will go from the matrix form to the quadratic form using a symmetric matrix $\\bs{A}$. Let's use the matrix of the example 3.\n\n$$\n\\bs{x} = \\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix}\n$$\n\nand\n\n$$\\bs{A}=\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 2 & 3\n\\end{bmatrix}\n$$\n\n$$\n\\begin{align*}\n\\bs{x^\\text{T}Ax}&=\n\\begin{bmatrix}\n x_1 & x_2\n\\end{bmatrix}\n\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 2 & 3\n\\end{bmatrix}\n\\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n x_1 & x_2\n\\end{bmatrix}\n\\begin{bmatrix}\n 6 x_1 + 2 x_2\\\\\\\\\n 2 x_1 + 3 x_2\n\\end{bmatrix}\\\\\\\\\n&=\nx_1(6 x_1 + 2 x_2) + x_2(2 x_1 + 3 x_2)\\\\\\\\\n&=\n6 x_1^2 + 4 x_1x_2 + 3 x_2^2\n\\end{align*}\n$$\n\nOur quadratic equation is thus $6 x_1^2 + 4 x_1x_2 + 3 x_2^2$.\n\n### Note\n\nIf $\\bs{A}$ is a diagonal matrix (all 0 except the diagonal), the quadratic form of $\\bs{x^\\text{T}Ax}$ will have no cross term. Take the following matrix form:\n\n$$\n\\bs{A}=\\begin{bmatrix}\n a & b\\\\\\\\\n c & d\n\\end{bmatrix}\n$$\n\nIf $\\bs{A}$ is diagonal, then $b$ and $c$ are 0 and since $f(\\bs{x}) = ax_1^2 +(b+c)x_1x_2 + dx_2^2$ there is no cross term. A quadratic form without cross term is called diagonal form since it comes from a diagonal matrix.",
"_____no_output_____"
],
[
"# Change of variable \n\nA change of variable (or linear substitution) simply means that we replace a variable by another one. We will see that it can be used to remove the cross terms in our quadratic equation. Without the cross term, it will then be easier to characterize the function and eventually optimize it (i.e finding its maximum or minimum).\n\n## With the quadratic form\n\n### Example 6.\n\nLet's take again our previous quadratic form:\n\n$$\n\\bs{x^\\text{T}Ax} = 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\n$$\n\nThe change of variable will concern $x_1$ and $x_2$. We can replace $x_1$ with any combination of $y_1$ and $y_2$ and $x_2$ with any combination $y_1$ and $y_2$. We will of course end up with a new equation. The nice thing is that we can find a specific substitution that will lead to a simplification of our statement. Specifically, it can be used to get rid of the cross term (in our example: $4 x_1x_2$). We will see later why it is interesting.\n\nActually, the right substitution is given by the eigenvectors of the matrix used to generate the quadratic form. Let's recall that the matrix form of our equation is:\n\n$$\n\\bs{x} = \\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix}\n$$\n\nand\n\n$$\\bs{A}=\\begin{bmatrix}\n 6 & 2\\\\\\\\\n 2 & 3\n\\end{bmatrix}\n$$\n\nand that the eigenvectors of $\\bs{A}$ are:\n\n$$\n\\begin{bmatrix}\n 0.89442719 & -0.4472136\\\\\\\\\n 0.4472136 & 0.89442719\n\\end{bmatrix}\n$$\n\nWith the purpose of simplification, we can replace these values with:\n\n$$\n\\begin{bmatrix}\n \\frac{2}{\\sqrt{5}} & -\\frac{1}{\\sqrt{5}}\\\\\\\\\n \\frac{1}{\\sqrt{5}} & \\frac{2}{\\sqrt{5}}\n\\end{bmatrix} =\n\\frac{1}{\\sqrt{5}}\n\\begin{bmatrix}\n 2 & -1\\\\\\\\\n 1 & 2\n\\end{bmatrix}\n$$\n\nSo our first eigenvector is:\n\n$$\n\\frac{1}{\\sqrt{5}}\n\\begin{bmatrix}\n 2\\\\\\\\\n 1\n\\end{bmatrix}\n$$\n\nand our second eigenvector is:\n\n$$\n\\frac{1}{\\sqrt{5}}\n\\begin{bmatrix}\n -1\\\\\\\\\n 2\n\\end{bmatrix}\n$$\n\nThe change of variable will lead to:\n\n$$\n\\begin{bmatrix}\n x_1\\\\\\\\\n x_2\n\\end{bmatrix} =\n\\frac{1}{\\sqrt{5}}\n\\begin{bmatrix}\n 2 & -1\\\\\\\\\n 1 & 2\n\\end{bmatrix}\n\\begin{bmatrix}\n y_1\\\\\\\\\n y_2\n\\end{bmatrix} =\n\\frac{1}{\\sqrt{5}}\n\\begin{bmatrix}\n 2y_1 - y_2\\\\\\\\\n y_1 + 2y_2\n\\end{bmatrix}\n$$\n\nso we have\n\n$$\n\\begin{cases}\nx_1 = \\frac{1}{\\sqrt{5}}(2y_1 - y_2)\\\\\\\\\nx_2 = \\frac{1}{\\sqrt{5}}(y_1 + 2y_2)\n\\end{cases}\n$$\n\nSo far so good! Let's replace that in our example:\n\n$$\n\\begin{align*}\n\\bs{x^\\text{T}Ax}\n&=\n6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\\\\\\n&=\n6 [\\frac{1}{\\sqrt{5}}(2y_1 - y_2)]^2 + 4 [\\frac{1}{\\sqrt{5}}(2y_1 - y_2)\\frac{1}{\\sqrt{5}}(y_1 + 2y_2)] + 3 [\\frac{1}{\\sqrt{5}}(y_1 + 2y_2)]^2\\\\\\\\\n&=\n\\frac{1}{5}[6 (2y_1 - y_2)^2 + 4 (2y_1 - y_2)(y_1 + 2y_2) + 3 (y_1 + 2y_2)^2]\\\\\\\\\n&=\n\\frac{1}{5}[6 (4y_1^2 - 4y_1y_2 + y_2^2) + 4 (2y_1^2 + 4y_1y_2 - y_1y_2 - 2y_2^2) + 3 (y_1^2 + 4y_1y_2 + 4y_2^2)]\\\\\\\\\n&=\n\\frac{1}{5}(24y_1^2 - 24y_1y_2 + 6y_2^2 + 8y_1^2 + 16y_1y_2 - 4y_1y_2 - 8y_2^2 + 3y_1^2 + 12y_1y_2 + 12y_2^2)\\\\\\\\\n&=\n\\frac{1}{5}(35y_1^2 + 10y_2^2)\\\\\\\\\n&=\n7y_1^2 + 2y_2^2\n\\end{align*}\n$$\n\nThat's great! Our new equation doesn't have any cross terms!",
"_____no_output_____"
],
[
"## With the Principal Axes Theorem\n\nActually there is a simpler way to do the change of variable. We can stay in the matrix form. Recall that we start with the form:\n\n<div>\n$$\nf(\\bs{x})=\\bs{x^\\text{T}Ax}\n$$\n</div>\n\nThe linear substitution can be wrote in these terms. We want replace the variables $\\bs{x}$ by $\\bs{y}$ that relates by:\n\n<div>\n$$\n\\bs{x}=P\\bs{y}\n$$\n</div>\n\nWe want to find $P$ such as our new equation (after the change of variable) doesn't contain the cross terms. The first step is to replace that in the first equation:\n\n<div>\n$$\n\\begin{align*}\n\\bs{x^\\text{T}Ax}\n&=\n(\\bs{Py})^\\text{T}\\bs{A}(\\bs{Py})\\\\\\\\\n&=\n\\bs{y}^\\text{T}(\\bs{P}^\\text{T}\\bs{AP})\\bs{y}\n\\end{align*}\n$$\n</div>\n\nCan you see the how to transform the left hand side ($\\bs{x}$) into the right hand side ($\\bs{y}$)? The substitution is done by replacing $\\bs{A}$ with $\\bs{P^\\text{T}AP}$. We also know that $\\bs{A}$ is symmetric and thus that there is a diagonal matrix $\\bs{D}$ containing the eigenvectors of $\\bs{A}$ and such as $\\bs{D}=\\bs{P}^\\text{T}\\bs{AP}$. We thus end up with:\n\n<div>\n$$\n\\bs{x^\\text{T}Ax}=\\bs{y^\\text{T}\\bs{D} y}\n$$\n</div>\n\n<span class='pquote'>\n We can use $\\bs{D}$ to simplify our quadratic equation and remove the cross terms\n</span>\n\nAll of this implies that we can use $\\bs{D}$ to simplify our quadratic equation and remove the cross terms. If you remember from example 2 we know that the eigenvalues of $\\bs{A}$ are:\n\n<div>\n$$\n\\bs{D}=\n\\begin{bmatrix}\n 7 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n$$\n</div>\n\n<div>\n$$\n\\begin{align*}\n\\bs{x^\\text{T}Ax}\n&=\n\\bs{y^\\text{T}\\bs{D} y}\\\\\\\\\n&=\n\\bs{y}^\\text{T}\n\\begin{bmatrix}\n 7 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n\\bs{y}\\\\\\\\\n&=\n\\begin{bmatrix}\n y_1 & y_2\n\\end{bmatrix}\n\\begin{bmatrix}\n 7 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n\\begin{bmatrix}\n y_1\\\\\\\\\n y_2\n\\end{bmatrix}\\\\\\\\\n&=\n\\begin{bmatrix}\n 7y_1 +0y_2 & 0y_1 + 2y_2\n\\end{bmatrix}\n\\begin{bmatrix}\n y_1\\\\\\\\\n y_2\n\\end{bmatrix}\\\\\\\\\n&=\n7y_1^2 + 2y_2^2\n\\end{align*}\n$$\n</div>\n\nThat's nice! If you look back to the change of variable that we have done in the quadratic form, you will see that we have found the same values!\n\nThis form (without cross-term) is called the **principal axes form**.\n\n### Summary\n\nTo summarise, the principal axes form can be found with\n\n$$\n\\bs{x^\\text{T}Ax} = \\lambda_1y_1^2 + \\lambda_2y_2^2\n$$\n\nwhere $\\lambda_1$ is the eigenvalue corresponding to the first eigenvector and $\\lambda_2$ the eigenvalue corresponding to the second eigenvector (second column of $\\bs{x}$).",
"_____no_output_____"
],
[
"# Finding f(x) with eigendecomposition\n\nWe will see that there is a way to find $f(\\bs{x})$ with eigenvectors and eigenvalues when $\\bs{x}$ is a unit vector. \n\nLet's start from:\n\n$$\nf(\\bs{x}) =\\bs{x^\\text{T}Ax}\n$$\n\nWe know that if $\\bs{x}$ is an eigenvector of $\\bs{A}$ and $\\lambda$ the corresponding eigenvalue, then $\n\\bs{Ax}=\\lambda \\bs{x}\n$. By replacing the term in the last equation we have:\n\n$$\nf(\\bs{x}) =\\bs{x^\\text{T}\\lambda x} = \\bs{x^\\text{T}x}\\lambda\n$$\n\nSince $\\bs{x}$ is a unit vector, $\\norm{\\bs{x}}_2=1$ and $\\bs{x^\\text{T}x}=1$ (cf. [2.5](https://hadrienj.github.io/posts/Deep-Learning-Book-Series-2.5-Norms/) Norms). We end up with\n\n$$\nf(\\bs{x}) = \\lambda\n$$\n\nThis is a usefull property. If $\\bs{x}$ is an eigenvector of $\\bs{A}$, $\nf(\\bs{x}) =\\bs{x^\\text{T}Ax}$ will take the value of the corresponding eigenvalue. We can see that this is working only if the euclidean norm of $\\bs{x}$ is 1 (i.e $\\bs{x}$ is a unit vector).\n\n### Example 7\n\nThis example will show that $f(\\bs{x}) = \\lambda$. Let's take again the last example, the eigenvectors of $\\bs{A}$ were\n\n$$\n\\bs{Q}=\n\\begin{bmatrix}\n 0.89442719 & -0.4472136\\\\\\\\\n 0.4472136 & 0.89442719\n\\end{bmatrix}\n$$\n\nand the eigenvalues\n\n$$\n\\bs{\\Lambda}=\n\\begin{bmatrix}\n 7 & 0\\\\\\\\\n 0 & 2\n\\end{bmatrix}\n$$\n\nSo if:\n\n$$\n\\bs{x}=\\begin{bmatrix}\n 0.89442719 & 0.4472136\n\\end{bmatrix}\n$$\n\n$f(\\bs{x})$ should be equal to 7. Let's check that's true.\n\n$$\n\\begin{align*}\nf(\\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\\\\\\n&= 6\\times 0.89442719^2 + 4\\times 0.89442719\\times 0.4472136 + 3 \\times 0.4472136^2\\\\\\\\\n&= 7\n\\end{align*}\n$$\n\nIn the same way, if $\\bs{x}=\\begin{bmatrix}\n -0.4472136 & 0.89442719\n\\end{bmatrix}$, $f(\\bs{x})$ should be equal to 2.\n\n$$\n\\begin{align*}\nf(\\bs{x}) &= 6 x_1^2 + 4 x_1x_2 + 3 x_2^2\\\\\\\\\n&= 6\\times -0.4472136^2 + 4\\times -0.4472136\\times 0.89442719 + 3 \\times 0.89442719^2\\\\\\\\\n&= 2\n\\end{align*}\n$$",
"_____no_output_____"
],
[
"# Quadratic form optimization\n\nDepending to the context, optimizing a function means finding its maximum or its minimum. It is for instance widely used to minimize the error of cost functions in machine learning.\n\nHere we will see how eigendecomposition can be used to optimize quadratic functions and why this can be done easily without cross terms. The difficulty is that we want a constrained optimization, that is to find the minimum or the maximum of the function for $f(\\bs{x})$ being a unit vector.\n\n### Example 7.\n\nWe want to optimize:\n\n$$\nf(\\bs{x}) =\\bs{x^\\text{T}Ax} \\textrm{ subject to }||\\bs{x}||_2= 1\n$$\n\nIn our last example we ended up with:\n\n$$\nf(\\bs{x}) = 7y_1^2 + 2y_2^2\n$$\n\nAnd the constraint of $\\bs{x}$ being a unit vector imply:\n\n$$\n||\\bs{x}||_2 = 1 \\Leftrightarrow x_1^2 + x_2^2 = 1\n$$\n\nWe can also show that $\\bs{y}$ has to be a unit vector if it is the case for $\\bs{x}$. Recall first that $\\bs{x}=\\bs{Py}$:\n\n$$\n\\begin{align*}\n||\\bs{x}||^2 &= \\bs{x^\\text{T}x}\\\\\\\\\n&= (\\bs{Py})^\\text{T}(\\bs{Py})\\\\\\\\\n&= \\bs{P^\\text{T}y^\\text{T}Py}\\\\\\\\\n&= \\bs{PP^\\text{T}y^\\text{T}y}\\\\\\\\\n&= \\bs{y^\\text{T}y} = ||\\bs{y}||^2\n\\end{align*}\n$$\n\nSo $\\norm{\\bs{x}}^2 = \\norm{\\bs{y}}^2 = 1$ and thus $y_1^2 + y_2^2 = 1$\n\nSince $y_1^2$ and $y_2^2$ cannot be negative because they are squared values, we can be sure that $2y_2^2\\leq7y_2^2$. Hence:\n\n$$\n\\begin{align*}\nf(\\bs{x}) &= 7y_1^2 + 2y_2^2\\\\\\\\\n&\\leq\n7y_1^2 + 7y_2^2\\\\\\\\\n&=\n7(y_1^2+y_2^2)\\\\\\\\\n&=\n7\n\\end{align*}\n$$\n\nThis means that the maximum value of $f(\\bs{x})$ is 7.\n\nThe same way can lead to find the minimum of $f(\\bs{x})$. $7y_1^2\\geq2y_1^2$ and:\n\n$$\n\\begin{align*}\nf(\\bs{x}) &= 7y_1^2 + 2y_2^2\\\\\\\\\n&\\geq\n2y_1^2 + 2y_2^2\\\\\\\\\n&=\n2(y_1^2+y_2^2)\\\\\\\\\n&=\n2\n\\end{align*}\n$$\n\nAnd the minimum of $f(\\bs{x})$ is 2.\n\n### Summary\n\nWe can note that the minimum of $f(\\bs{x})$ is the minimum eigenvalue of the corresponding matrix $\\bs{A}$. Another useful fact is that this value is obtained when $\\bs{x}$ takes the value of the corresponding eigenvector (check back the preceding paragraph). In that way, $f(\\bs{x})=7$ when $\\bs{x}=\\begin{bmatrix}0.89442719 & 0.4472136\\end{bmatrix}$. This shows how useful are the eigenvalues and eigenvector in this kind of constrained optimization.",
"_____no_output_____"
],
[
"## Graphical views\n\nWe saw that the quadratic functions $f(\\bs{x}) = ax_1^2 +2bx_1x_2 + cx_2^2$ can be represented by the symmetric matrix $\\bs{A}$:\n\n$$\n\\bs{A}=\\begin{bmatrix}\n a & b\\\\\\\\\n b & c\n\\end{bmatrix}\n$$\n\nGraphically, these functions can take one of three general shapes (click on the links to go to the Surface Plotter and move the shapes):\n\n1.[Positive-definite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x%2By*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49) | 2.[Negative-definite form](https://academo.org/demos/3d-surface-plotter/?expression=-x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=25) | 3.[Indefinite form](https://academo.org/demos/3d-surface-plotter/?expression=x*x-y*y&xRange=-50%2C+50&yRange=-50%2C+50&resolution=49)\n:-------------------------:|:-------------------------:|:-------:\n<img src=\"images/quadratic-functions-positive-definite-form.png\" alt=\"Quadratic function with a positive definite form\" title=\"Quadratic function with a positive definite form\"> | <img src=\"images/quadratic-functions-negative-definite-form.png\" alt=\"Quadratic function with a negative definite form\" title=\"Quadratic function with a negative definite form\"> | <img src=\"images/quadratic-functions-indefinite-form.png\" alt=\"Quadratic function with a indefinite form\" title=\"Quadratic function with a indefinite form\">\n\n\n\nWith the constraints that $\\bs{x}$ is a unit vector, the minimum of the function $f(\\bs{x})$ corresponds to the smallest eigenvalue and is obtained with its corresponding eigenvector. The maximum corresponds to the biggest eigenvalue and is obtained with its corresponding eigenvector.",
"_____no_output_____"
],
[
"# Conclusion\n\nWe have seen a lot of things in this chapter. We saw that linear algebra can be used to solve a variety of mathematical problems and more specifically that eigendecomposition is a powerful tool! However, it cannot be used for non square matrices. In the next chapter, we will see the Singular Value Decomposition (SVD) which is another way of decomposing matrices. The advantage of the SVD is that you can use it also with non-square matrices.",
"_____no_output_____"
],
[
"# BONUS: visualizing linear transformations\n\nWe can see the effect of eigenvectors and eigenvalues in linear transformation. We will see first how linear transformation works. Linear transformation is a mapping between an input vector and an output vector. Different operations like projection or rotation are linear transformations. Every linear transformations can be though as applying a matrix on the input vector. We will see the meaning of this graphically. For that purpose, let's start by drawing the set of unit vectors (they are all vectors with a norm of 1).",
"_____no_output_____"
]
],
[
[
"t = np.linspace(0, 2*np.pi, 100)\nx = np.cos(t)\ny = np.sin(t)\n\nplt.figure()\nplt.plot(x, y)\nplt.xlim(-1.5, 1.5)\nplt.ylim(-1.5, 1.5)\nplt.show()",
"_____no_output_____"
]
],
[
[
"Then, we will transform each of these points by applying a matrix $\\bs{A}$. This is the goal of the function bellow that takes a matrix as input and will draw\n\n- the origin set of unit vectors\n- the transformed set of unit vectors\n- the eigenvectors\n- the eigenvectors scalled by their eigenvalues",
"_____no_output_____"
]
],
[
[
"def linearTransformation(transformMatrix):\n orange = '#FF9A13'\n blue = '#1190FF'\n # Create original set of unit vectors\n t = np.linspace(0, 2*np.pi, 100)\n x = np.cos(t)\n y = np.sin(t)\n\n # Calculate eigenvectors and eigenvalues\n eigVecs = np.linalg.eig(transformMatrix)[1]\n eigVals = np.diag(np.linalg.eig(transformMatrix)[0])\n \n # Create vectors of 0 to store new transformed values\n newX = np.zeros(len(x))\n newY = np.zeros(len(x))\n for i in range(len(x)):\n unitVector_i = np.array([x[i], y[i]])\n # Apply the matrix to the vector\n newXY = transformMatrix.dot(unitVector_i)\n newX[i] = newXY[0]\n newY[i] = newXY[1]\n \n plotVectors([eigVecs[:,0], eigVecs[:,1]],\n cols=[blue, blue])\n plt.plot(x, y)\n\n plotVectors([eigVals[0,0]*eigVecs[:,0], eigVals[1,1]*eigVecs[:,1]],\n cols=[orange, orange])\n plt.plot(newX, newY)\n plt.xlim(-5, 5)\n plt.ylim(-5, 5)\n plt.show()",
"_____no_output_____"
],
[
"A = np.array([[1,-1], [-1, 4]])\nlinearTransformation(A)",
"_____no_output_____"
]
],
[
[
"We can see the unit circle in dark blue, the non scaled eigenvectors in light blue, the transformed unit circle in green and the scaled eigenvectors in yellow.\n\nIt is worth noting that the eigenvectors are orthogonal here because the matrix is symmetric. Let's try with a non-symmetric matrix:",
"_____no_output_____"
]
],
[
[
"A = np.array([[1,1], [-1, 4]])\nlinearTransformation(A)",
"_____no_output_____"
]
],
[
[
"In this case, the eigenvectors are not orthogonal!",
"_____no_output_____"
],
[
"# References\n\n## Videos of Gilbert Strang\n\n- [Gilbert Strang, Lec21 MIT - Eigenvalues and eigenvectors](https://www.youtube.com/watch?v=lXNXrLcoerU)\n\n- [Gilbert Strang, Lec 21 MIT, Spring 2005](https://www.youtube.com/watch?v=lXNXrLcoerU)\n\n## Quadratic forms\n\n- [David Lay, University of Colorado, Denver](http://math.ucdenver.edu/~esulliva/LinearAlgebra/SlideShows/07_02.pdf)\n\n- [math.stackexchange QA](https://math.stackexchange.com/questions/2207111/eigendecomposition-optimization-of-quadratic-expressions)\n\n## Eigenvectors\n\n- [Victor Powell and Lewis Lehe - Interactive representation of eigenvectors](http://setosa.io/ev/eigenvectors-and-eigenvalues/)\n\n## Linear transformations\n\n- [Gilbert Strang - Linear transformation](http://ia802205.us.archive.org/18/items/MIT18.06S05_MP4/30.mp4)\n\n- [Linear transformation - demo video](https://www.youtube.com/watch?v=wXCRcnbCsJA)",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c401b3c9b3b86ebd1ba959012d4538bdaac282 | 5,896 | ipynb | Jupyter Notebook | Machine_Learning/Linear_Regression/Project.ipynb | nardao666/spark | 95161deca4f027cfa436f7ec023faaf0337562b1 | [
"Apache-2.0"
] | null | null | null | Machine_Learning/Linear_Regression/Project.ipynb | nardao666/spark | 95161deca4f027cfa436f7ec023faaf0337562b1 | [
"Apache-2.0"
] | null | null | null | Machine_Learning/Linear_Regression/Project.ipynb | nardao666/spark | 95161deca4f027cfa436f7ec023faaf0337562b1 | [
"Apache-2.0"
] | null | null | null | 30.081633 | 107 | 0.427239 | [
[
[
"import findspark",
"_____no_output_____"
],
[
"findspark.init('/home/nardao666/spark-2.4.0-bin-hadoop2.7/')",
"_____no_output_____"
],
[
"import pyspark",
"_____no_output_____"
],
[
"from pyspark.sql import SparkSession",
"_____no_output_____"
],
[
"spark = SparkSession.builder.appName('ship').getOrCreate()",
"_____no_output_____"
],
[
"from pyspark.ml.regression import LinearRegression",
"_____no_output_____"
],
[
"training = spark.read.csv('cruise_ship_info.csv', inferSchema = True, header = True)",
"_____no_output_____"
],
[
"training.show()",
"+-----------+-----------+---+------------------+----------+------+------+-----------------+----+\n| Ship_name|Cruise_line|Age| Tonnage|passengers|length|cabins|passenger_density|crew|\n+-----------+-----------+---+------------------+----------+------+------+-----------------+----+\n| Journey| Azamara| 6|30.276999999999997| 6.94| 5.94| 3.55| 42.64|3.55|\n| Quest| Azamara| 6|30.276999999999997| 6.94| 5.94| 3.55| 42.64|3.55|\n|Celebration| Carnival| 26| 47.262| 14.86| 7.22| 7.43| 31.8| 6.7|\n| Conquest| Carnival| 11| 110.0| 29.74| 9.53| 14.88| 36.99|19.1|\n| Destiny| Carnival| 17| 101.353| 26.42| 8.92| 13.21| 38.36|10.0|\n| Ecstasy| Carnival| 22| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n| Elation| Carnival| 15| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n| Fantasy| Carnival| 23| 70.367| 20.56| 8.55| 10.22| 34.23| 9.2|\n|Fascination| Carnival| 19| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n| Freedom| Carnival| 6|110.23899999999999| 37.0| 9.51| 14.87| 29.79|11.5|\n| Glory| Carnival| 10| 110.0| 29.74| 9.51| 14.87| 36.99|11.6|\n| Holiday| Carnival| 28| 46.052| 14.52| 7.27| 7.26| 31.72| 6.6|\n|Imagination| Carnival| 18| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n|Inspiration| Carnival| 17| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n| Legend| Carnival| 11| 86.0| 21.24| 9.63| 10.62| 40.49| 9.3|\n| Liberty*| Carnival| 8| 110.0| 29.74| 9.51| 14.87| 36.99|11.6|\n| Miracle| Carnival| 9| 88.5| 21.24| 9.63| 10.62| 41.67|10.3|\n| Paradise| Carnival| 15| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n| Pride| Carnival| 12| 88.5| 21.24| 9.63| 11.62| 41.67| 9.3|\n| Sensation| Carnival| 20| 70.367| 20.52| 8.55| 10.2| 34.29| 9.2|\n+-----------+-----------+---+------------------+----------+------+------+-----------------+----+\nonly showing top 20 rows\n\n"
],
[
"training.columns",
"_____no_output_____"
],
[
"training.printSchema()",
"root\n |-- Ship_name: string (nullable = true)\n |-- Cruise_line: string (nullable = true)\n |-- Age: integer (nullable = true)\n |-- Tonnage: double (nullable = true)\n |-- passengers: double (nullable = true)\n |-- length: double (nullable = true)\n |-- cabins: double (nullable = true)\n |-- passenger_density: double (nullable = true)\n |-- crew: double (nullable = true)\n\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c401e344e6c39cce44f2f61097fc553dbb8557 | 7,177 | ipynb | Jupyter Notebook | Regularization and Model Selection.ipynb | reata/MachineLearning | f1082ccbe79d65008ac6bcefe9e184a090eb91bb | [
"MIT"
] | null | null | null | Regularization and Model Selection.ipynb | reata/MachineLearning | f1082ccbe79d65008ac6bcefe9e184a090eb91bb | [
"MIT"
] | null | null | null | Regularization and Model Selection.ipynb | reata/MachineLearning | f1082ccbe79d65008ac6bcefe9e184a090eb91bb | [
"MIT"
] | 2 | 2018-10-08T16:05:27.000Z | 2020-12-14T14:58:23.000Z | 46.00641 | 270 | 0.605963 | [
[
[
"# 正则化和模型选择 Regularization and Model Selection\n\n设想现在对于一个学习问题,需要从一组不同的模型中进行挑选。比如多元回归模型 $h_\\theta(x)=g(\\theta_0+\\theta_1x+\\theta_2x^2+\\cdots+\\theta_kx^k)$,如何自动地确定 $k$ 的取值,从而在偏差和方差之间达到较好的权衡?或者对于局部加权线性回归,如何确定带宽 $\\tau$ 的值,以及对于 $\\ell_1$ 正则化的支持向量机,如何确定参数 $C$ 的值?\n\n为了方便后续的讨论,统一假定有一组有限数量的模型集合 $\\mathcal{M}=\\{M_1,\\cdots,M_d\\}$。(推广到无限数量的集合也非常容易,比如对于局部加权线性模型的带宽 $\\tau$,其取值范围为 $\\mathbb{R}^+$,只需要将 $\\tau$ 离散化,考虑有限的若干个值即可。更一般地,这里讨论的绝大多数算法,都可以看做在模型空间范围内的优化搜索问题)\n\n本节包括以下内容:\n1. 交叉验证 Cross validation\n2. 特征选择 Feature selection\n3. 贝叶斯统计学和正则化 Bayesian statistics and regularization",
"_____no_output_____"
],
[
"### 1. 交叉验证 Cross Validation\n\n设想有训练集 $S$。回顾经验风险最小化,一个直观的模型选择过程如下:\n1. 对于每一个模型 $M_i$,用 $S$ 进行训练,得到相应的假设函数 $h_i$。\n2. 挑选训练误差最小的假设函数。\n\n这个算法的表现可能会很差。考虑多元线性回归,模型的阶越高,它对训练集 $S$ 的拟合情况就越好,从而可以得到越低的训练误差。因此,上面这个方法,总是会挑选出高方差、高阶的多元模型。\n\n下面是 **hold-out 交叉验证**(也称简单交叉验证)的思路:\n1. 随机将 $S$ 分为 $S_{train}$(例如大约70%的数据)和 $S_{cv}$(剩下的30%)。这里,$S_{cv}$ 称作hold-out交叉验证集。\n2. 对于每一个模型 $M_i$,仅使用 $S_{train}$ 进行训练,得到相应的假设函数 $h_i$。\n3. 从 $h_i$ 中挑选对hold-out交叉验证集误差 $\\hat{\\epsilon}_{cv}(h_i)$ 最小的假设函数。\n\n对未用来训练数据的 $S_{cv}$ 计算的交叉验证集误差,是泛化误差的一个更好的估计量。通常,在hold-out交叉验证中会保留四分之一到三分之一的数据,30%是最常见的选择。\n\nhold-out交叉验证还有一个可选的步骤,在上述流程完成后,可以用模型 $M_i$ 针对整个训练集 $S$ 重新训练。(通常这会训练出稍好的模型,不过也有例外,比如学习算法非常容易收到初始条件或初始数据影响的情况,这时 $M_i$ 在 $S_{train}$ 上表现良好,并不一定也会在 $S_{cv}$ 上表现良好,这种情况最好不要执行这个重新训练的过程)\n\nhold-out交叉验证的一个劣势在于,它“浪费”了30%的数据。即便最后使用整个训练集重新训练了模型,看上去模型挑选的过程还是只针对 $0.7m$ 的训练样本,而不是所有 $m$ 个样本,因为我们的测试的模型只使用了 $0.7m$ 的数据。当数据量非常大的时候,这通常没什么问题,但数据量小的时候,可能就需要更好的策略。\n\n**k折交叉验证**,每次训练时,都只保留更少的数据:\n1. 随机将 $S$ 分为 $k$ 个不相交的子集,每个子集包含 $m/k$ 个训练样本。记做 $S_1,\\cdots,S_k$。\n2. 对于每一个模型 $M_i$:对 $j=1,\\cdots,k$,使用 $S_1 \\cup \\cdots \\cup S_{j-1} \\cup S_{j+1} \\cup \\cdots \\cup S_k$(即对训练集中除去 $S_j$ 的部分进行训练),得到假设函数 $h_{ij}$。在 $S_j$ 上测试,得到 $\\hat{\\epsilon}_{S_{j}}(h_{ij})$。最终 $M_i$ 的泛化误差估计量表示为 $\\hat{\\epsilon}_{S_{j}}(h_{ij})$ 的平均值。\n3. 挑选预计泛化误差最小的模型 $M_i$,用整个训练集 $S$ 重新训练,得到最终的假设函数。\n\n最常见的选择是令 $k=10$。由于每次训练时保留的数据量更小,而每个模型都需要训练 $k$ 次,k折交叉验证的计算开销会比hold-out交叉验证更大。\n\n尽管 $k=10$ 是最常用的,但当数据量非常小的时候,有时也会采用 $k=m$ 来保证每次训练尽可能保留最少的数据用于验证。这种特殊情况的k折验证,也叫作**留一交叉验证 leave-one-out cross validation**。\n\n最后,尽管这里介绍交叉验证用于模型选择,实际上,交叉验证也可以用来做单个模型的效果评估。",
"_____no_output_____"
],
[
"### 2. 特征选择 Feature Selection\n\n模型选择中的一个特殊应用是特征选择。设想一个监督学习问题拥有非常大数量的特征(甚至可能 $n \\gg m$),但实际上能只有一小部分特征与学习任务“有关”。即便使用了最简单的线性分类器,假设函数类的VC维依然是 $O(n)$,除非训练集足够大,否则就会有潜在的过拟合风险。\n\n在上面的设定下,可以使用一个特征选择算法来减少特征。给定 $n$ 个特征,最多有 $2^n$ 种特征组合,所以特征选择可以转换成一个 $2^n$ 种模型的模型选择问题。如果 $n$ 很大,枚举所有 $2^n$ 种模型的计算开销将会非常大。所以,通常会采用一些探索性的搜索策略,来找到一个不错的特征子集。下面这个策略,叫做**前向搜索 forward search**:\n1. 初始化 $\\mathcal{F}=\\emptyset$\n2. 重复以下两个步骤:(a) 对于 $i=1,\\cdots,n$,如果 $i \\notin \\mathcal{F}$,令 $\\mathcal{F}_i=\\mathcal{F} \\cup \\{i\\}$,然后使用交叉验证方法来测试特征集 $\\mathcal{F}_i$。(也即只使用 $\\mathcal{F}_i$ 中的特征来训练模型,并预估泛化误差)。(b) 令 $\\mathcal{F}$ 等于步骤(a)中最好的特征子集。\n3. 选择整个搜索评估过程中表现最好的特征子集。\n\n最外层的循环,既可以在 $\\mathcal{F}={1,\\cdots,n}$ 时终止,也可以中止于 $|\\mathcal{F}|$ 超过某个预设的阈值。(比如,预先评估了模型最多只使用 $x$ 个特征)\n\n上述这个算法属于**模型特征选择包装 wrapper model feature selection**的一个实例,因为它是一个包装在学习算法之外的步骤,通过一定策略重复调用学习算法来评估其表现。除去前项选择之外,还有一些别的搜索策略。比如**后向选择 backward search**:从 $\\mathcal{F}=\\{1,\\cdots,n\\}$ 开始,每次删除一个特征直到 $\\mathcal{F}=\\emptyset$。\n\n特征选择包装算法通常表现不错,但是计算开销很大。完成整个搜索过程需要 $O(n^2)$ 次对学习算法的调用。\n\n**过滤特征选择 filter feature selection** 则是一个计算开销小的探索性特征选择策略。这个策略的核心,是计算出某种能表示特征 $x_i$ 对于标签 $y$ 贡献的信息量的评分 $S(i)$。这样,只要再选择其中得分最大的 $k$ 个特征即可。\n\n可以选择皮尔森相关性的绝对值,作为评分标准。但在实际中,最长使用(尤其对于离散特征)的方法叫做**互信息mutual information**,定义如下:\n$$ MI(x_i,y) = \\sum_{x_i \\in \\{0,1\\}}\\sum_{y \\in \\{0,1\\}}p(x_i,y)log\\frac{p(x_i,y)}{p(x_i)p(y)} $$\n(这里的等式假定 $x_i,y$ 都是二元值。更广泛地定义会根据变量的定义域来计算)概率 $p(x_i,y),p(x_i),p(y)$ 都可以通过训练集的经验分布来进行预测。\n\n注意到,互信息也可以表示为**KL散度 Kullback-Leibler divergence**:\n$$ MI(x_i,y)=KL(p(x_i,y)||p(x_i)p(y)) $$\nKL散度度量的是 $p(x_i,y)$ 与 $p(x_i)p(y))$ 之间分布的差异程度。如果 $x_i$ 和 $y$ 是独立随机变量,那么 $p(x_i,y)=p(x_i)p(y))$,这时二者的KL散度为零。这和我们的直觉相符,如果 $x_i$ 和 $y$ 相互独立,那么 $x_i$ 对于 $y$ 就没有贡献任何信息量,$S(i)$ 就应当非常小。\n\n最后一个细节:当已经计算好 $S(i)$ 将特征根据重要性进行排序了之后,如何决定使用多少个特征呢?标准的做法是,使用交叉验证来确定 $k$ 值。",
"_____no_output_____"
],
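[
"As a quick illustration of filter feature selection (again an added sketch, not part of the original notes), the mutual information score can be estimated from empirical counts for binary features; the data here is synthetic.\n```python\nimport numpy as np\n\ndef mutual_information(x, y):\n    # empirical mutual information between two binary vectors\n    mi = 0.0\n    for xv in (0, 1):\n        for yv in (0, 1):\n            p_xy = np.mean((x == xv) & (y == yv))\n            p_x = np.mean(x == xv)\n            p_y = np.mean(y == yv)\n            if p_xy > 0:\n                mi += p_xy * np.log(p_xy / (p_x * p_y))\n    return mi\n\nrng = np.random.default_rng(0)\ny = rng.integers(0, 2, size=1000)\nrelevant = (y ^ (rng.random(1000) < 0.1)).astype(int)  # mostly a copy of y, so high score\nnoise = rng.integers(0, 2, size=1000)                   # independent of y, so score near zero\nprint(mutual_information(relevant, y), mutual_information(noise, y))\n```",
"_____no_output_____"
],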
[
"### 3. 贝叶斯统计学和正则化 Bayesian statistics and regularization\n\n正则化是应对过拟合的一个有效方法。之前,在参数拟合的过程中多使用最大似然估计法,根据下面这个公式来选择参数\n$$ \\theta_{ML}=arg \\max_{\\theta}\\prod_{i=1}^m p(y^{(i)}|x^{(i)};\\theta) $$\n在这整个过程中,$\\theta$ 都被视作一个确定但未知的常数,这是**频率学派 frequentist statistics**的视角。在频率学派看来,参数 $\\theta$ 并不是随机而仅仅是未知的,我们需要通过某种统计推断(比如最大似然估计法)的手段来估计这个参数。\n\n与此相对,**贝叶斯学派 Bayesian statistics**将 $\\theta$ 看做一个值未知的随机变量。从而,我们可以假设一个关于 $\\theta$ 的**先验概率 prior distribution** $p(\\theta)$。而给定训练集 $S=\\{(x^{(i)}, y^{(i)})\\}_{i=1}^m$,如果这时要对一个新的输入值 $x$ 做预测,我们需要先计算 $\\theta$ 的**后验概率 posterior distribution**\n$$\n\\begin{split}\np(\\theta|S) &= \\frac{p(S|\\theta)p(\\theta)}{p(S)} \\\\\n&= \\frac{\\prod_{i=1}^m p(y^{(i)}|x^{(i)},\\theta)p(\\theta)}{\\int_{\\theta}(\\prod_{i=1}^m p(y^{(i)}|x^{(i)},\\theta)p(\\theta))d\\theta}\n\\end{split}\n$$\n注意到,这时 $\\theta$ 已经是一个随机变量,是可以 $p(y^{(i)}|x^{(i)},\\theta)$ 的形式作为条件概率的条件出现的(而不是之前的 $p(y^{(i)}|x^{(i)};\\theta)$)。\n\n之后,对于新的数据 $x$ 进行预测时,计算其标签的后验概率\n$$ p(y|x,S)=\\int_{\\theta}p(y|x,\\theta)p(\\theta|S)d\\theta $$\n\n因此,如果目标是预测给定 $x$ 时 $y$ 的期望值,那么\n$$ E[y|x,S]=\\int_{y}yp(y|x,S)dy $$\n\n上面的过程全部都使用了贝叶斯统计学的思路,这样计算开销会无比巨大。因而,在实际中,会使用一些近似的方法来估计 $\\theta$ 的后验概率以简化计算。最常见的近似,是使用点估计,**最大后验概率估计 maximum a posteriori **\n$$ \\theta_{MAP}=arg \\max_\\theta \\prod_{i=1}^m p(y^{(i)}|x^{(i)},\\theta)p(\\theta) $$\n注意到这个公示仅仅比最大似然估计法在因子中增加了一项 $p(\\theta)$。\n\n而实际中,对先验概率 $p(\\theta)$ 的假设,通常是 $\\theta \\sim \\mathcal{N}(0,\\tau^2I)$。使用这样的先验概率假设,拟合出的参数 $\\theta_{MAP}$ 的模会比最大似然估计法小很多。这使得贝叶斯最大后验概率估计更不易受到过拟合的影响。",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c4045692ee3488a4b261dbc1d20103bc99c1cf | 64,578 | ipynb | Jupyter Notebook | .ipynb_checkpoints/hunchback-checkpoint.ipynb | Barry0121/disney_movies_scripts | 418a38ab4eaab79161de11d1b80a8a62052ed47e | [
"MIT"
] | null | null | null | .ipynb_checkpoints/hunchback-checkpoint.ipynb | Barry0121/disney_movies_scripts | 418a38ab4eaab79161de11d1b80a8a62052ed47e | [
"MIT"
] | null | null | null | .ipynb_checkpoints/hunchback-checkpoint.ipynb | Barry0121/disney_movies_scripts | 418a38ab4eaab79161de11d1b80a8a62052ed47e | [
"MIT"
] | null | null | null | 64.320717 | 1,786 | 0.481387 | [
[
[
"import numpy as np \nimport matplotlib.pyplot as plt \nimport pandas as pd \nimport os",
"_____no_output_____"
],
[
"# fp = os.path.join('..\\scripts', 'the_hunchback_of_notre_dame.txt ')\n\n# # os.listdir('scripts')\n# chars = np.array([])\n# words = np.array([])\n# scene_setup = np.array([])\n# new_char = True\n# with open(fp, 'r', encoding='utf-8') as infile:\n# # print(infile.readlines())\n# print(infile.read())\n# for line in infile:\n# # print(line)\n# if line == '\\n':\n# continue\n# if not new_char:\n\n# words[-1] += line\n# if ':' in line:\n \n# # print([line.split(':')[0]])\n# if ' ' in line:\n# words[-1] += line\n# continue\n# words = np.append(words,[line.split(':')[1]])\n# chars = np.append(chars,[line.split(':')[0]])\n\n# new_char = False\n# # print(line)\n# # print(new_char)\n# # print('\\t\\thello')\n# chars.shape\n# words.shape\n# words[-4]",
"_____no_output_____"
],
[
"# fp = os.path.join('..\\scripts', 'the_hunchback_of_notre_dame.txt ')\n\n# # os.listdir('scripts')\n# chars = np.array([])\n# words = np.array([])\n# scene_setup = np.array([])\n# scene = False\n# new_char = True\n\n# # def find_setup(line):\n# # if '(' in line:\n# # s = line\n# # scene_setup += s[s.find(\"(\")+1:s.find(\")\")]\n# # if s.find(\")\") == -1:\n# # scene = True\n# # continue\n# # if scene:\n# # if ')' in line:\n# # scene_setup[0] += s[s.find(\"(\")+1:s.find(\")\")]\n# with open(fp, 'r', encoding='utf-8') as infile:\n# # print(infile.readlines())\n# for line in infile:\n# # print([line])\n# # if line == '\\n':\n# # continue\n# s = line\n\n# if '(' in line: \n# # print(scene_setup)\n# # print( [s[s.find(\"(\")+1:s.find(\")\")]])\n \n# if s.find(\")\") == -1:\n# scene_setup = np.append(scene_setup,[s[s.find(\"(\"):s.find(\")\")]])\n# scene = True\n# continue\n# else:\n# scene_setup = np.append(scene_setup,[s[s.find(\"(\"):s.find(\")\")+1]])\n \n# if scene:\n# # print(line)\n# # print(scene_setup[-1])\n# # print(line)\n# if ')' in line:\n# # print(True)\n# print([scene_setup[-1] +s[:s.find(\")\")+1]])\n# scene_setup[-1] += s[:s.find(\")\")+1]\n# # print(s.find(\")\")+1)\n# # print(s[:s.find(\")\")+1])\n# # print(scene_setup[-1] +s[:s.find(\")\")+1])\n# scene = False\n# else:\n# scene_setup[-1] += s[:]\n \n# if not new_char:\n# words[-1] += line\n# if ':' in line:\n# if ' ' in line:\n# words[-1] += line\n \n# continue\n# words = np.append(words,[line.split(':')[1]])\n# chars = np.append(chars,[line.split(':')[0]])\n\n# new_char = False\n# # print(line)\n# # print(new_char)\n# # print('\\t\\thello')\n# chars.shape\n# words.shape\n# words[-4]\n# scene_setup",
"_____no_output_____"
],
[
"print('\\t\\thello')\n' ' in' of witchcraft. The sentence'",
"\t\thello\n"
],
[
"import re\ns = \"this is a ki_te message\"\ns[s.find(\"(\")+1:s.find(\")\")]\ns.find(\")\")",
"_____no_output_____"
],
[
"### fp = os.path.join('..\\scripts', 'the_hunchback_of_notre_dame.txt')\n\n# os.listdir('scripts')\nchars = []\nwords = []\nscene_setup = []\nscene = False\nnew_char = True\n\ndef get_scene_setup(string):\n '''\n get scence setup\n '''\n s = string\n scene_setup = s[s.find(\"(\"):s.find(\")\")+1]\n return scene_setup\ndef remove_scene_setup(string):\n '''\n get scence setup\n '''\n s = string\n scene_setup = s[s.find(\"(\"):s.find(\")\")+1]\n s = string.replace(scene_setup, ' ')\n return scene_setup\n# if '(' in line: \n# if s.find(\")\") == -1:\n# scene_setup += [s[s.find(\"(\"):]]\n# scene = True\n# continue\n# else:\n# scene_setup += [s[s.find(\"(\"):s.find(\")\")+1]] \n# if scene:\n# if ')' in line:\n# scene_setup[-1] += s[:s.find(\")\")+1]\n# scene = False\n# else:\n# scene_setup[-1] += s[:]\nline_nums = []\nwith open(fp, 'r', encoding='utf-8') as infile:\n# print(infile.readlines())\n for num_l, line in enumerate(infile):\n num_l += 1\n# print(line)\n\n s = line\n# get scene setup\n\n\n if not new_char:\n if ':' in line and ' ' not in line:\n pass\n else:\n words[-1] += line\n if ':' in line:\n if ' ' in line:\n words[-1] += line\n continue\n line_nums += [num_l]\n words += [line.split(':')[1]]\n chars += [line.split(':')[0]]\n\n new_char = False\n\ndf = pd.DataFrame()\n# np.array(chars).shape\n# np.array(words).shape\nchars_words = np.array(list(zip(chars, words,line_nums)))\ndraft = pd.DataFrame(chars_words,columns = ['chars','lines','line_num'])\n# 2) put lines into a df & store it\n# 3)\ndraft['scene_setup'] = draft.lines.apply(get_scene_setup)\ndraft",
"_____no_output_____"
],
[
"# draft['mod_lines'] = draft.lines.str.replace(draft.scene_setup,'')\n'ádasda'.replace('a','')",
"_____no_output_____"
],
[
"path = \"..\\\\scripts\\\\sleeping_beauty.txt\"\nfp = os.path.join('..\\scripts', 'the_hunchback_of_notre_dame.txt')\n# with open(fp, 'r', encoding='utf-8') as infile:\n# print(infile.readlines())\nfile = open(fp, 'r')\nscript = file.readlines()\nfile.close()\nstart, end = 25, 1156\n\n\nline_count = 0 # number of lines total \ncharacters = set() # all the characters\nlines = [] # character with their lines, index are the line numbers\n\n#TODO: get rid of '[...]', stage setup instructions, from the lines \ndef remove_setup(string): \n stack = False\n new_str = \"\"\n for s in string:\n if s=='(':\n stack = True\n elif s==')':\n stack = False\n else: \n if not stack:\n new_str += s\n return new_str.strip()\n\n\nfor i in range(start, end+1):\n holder = script[i] # line that is being read\n if ':' in holder:\n character = holder[:-1]\n characters.add(character) # record for unique characters\n lines.append([character, \"\"])\n line_count += 1\n else:\n if len(holder) != 0 and line_count != 0:\n lines[line_count-1][1] += holder\nlines\n# lines = np.array(lines)\n# d = {'Char':lines[:,0]\n# ,'line': lines[:,1]}\n# pd.DataFrame(d)\nlines",
"_____no_output_____"
],
[
"m = np.array(['asdsad'])\nm[-1] += 'asdsadasd'\nm",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c4066aec49c2890a0b475b1332d280400447c6 | 5,906 | ipynb | Jupyter Notebook | codes/labs_lecture03/lab01_linear_module/linear_module_demo.ipynb | xb-trainings/IPAM_2018 | 8927cb6be6dc112b64cb99b7bbcad4ae8c9110dd | [
"MIT"
] | null | null | null | codes/labs_lecture03/lab01_linear_module/linear_module_demo.ipynb | xb-trainings/IPAM_2018 | 8927cb6be6dc112b64cb99b7bbcad4ae8c9110dd | [
"MIT"
] | null | null | null | codes/labs_lecture03/lab01_linear_module/linear_module_demo.ipynb | xb-trainings/IPAM_2018 | 8927cb6be6dc112b64cb99b7bbcad4ae8c9110dd | [
"MIT"
] | null | null | null | 20.506944 | 147 | 0.469015 | [
[
[
"# Lab 01 : Linear module -- demo",
"_____no_output_____"
]
],
[
[
"import torch\nimport torch.nn as nn",
"_____no_output_____"
]
],
[
[
"### Make a _Linear Module_ that takes input of size 5 and return output of size 3",
"_____no_output_____"
]
],
[
[
"mod = nn.Linear(5,3,bias=True)\nprint(mod)",
"Linear(in_features=5, out_features=3, bias=True)\ntorch.Size([3, 5])\ntorch.Size([3])\n"
]
],
[
[
"### Let's make a random tensor of size 5:",
"_____no_output_____"
]
],
[
[
"x=torch.rand(5)\nprint(x)\nprint(x.size())",
"tensor([0.5148, 0.6442, 0.5563, 0.4040, 0.9193])\ntorch.Size([5])\n"
]
],
[
[
"### Feed it to the module:",
"_____no_output_____"
]
],
[
[
"y=mod(x)\nprint(y)",
"tensor([ 0.3300, -0.1137, 0.6271], grad_fn=<ThAddBackward>)\n"
]
],
[
[
"### The output y is computed according to the formula:\n$$\n\\begin{bmatrix}\ny_1\\\\ y_2 \\\\y_3 \n\\end{bmatrix} =\n\\begin{bmatrix}\nw_{11} & w_{12} & w_{13}& w_{14}& w_{15} \\\\\nw_{21} & w_{22} & w_{23}& w_{24}& w_{25} \\\\\nw_{31} & w_{32} & w_{33}& w_{34}& w_{35} \\\\\n\\end{bmatrix}\n\\begin{bmatrix}\nx_1\\\\ x_2 \\\\x_3 \\\\ x_4 \\\\x_5\n\\end{bmatrix}\n+\n\\begin{bmatrix}\nb_1\\\\ b_2 \\\\b_3 \n\\end{bmatrix}\n$$\n### were the $w_{ij}$'s are the weight parameters and the $b_i$'s are the bias parameters. These internal parameters can be access as follow:",
"_____no_output_____"
]
],
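[
[
"# A small added check (not part of the original demo): recompute y by hand\n# from the weight matrix and the bias to confirm the formula above.\ny_manual = torch.matmul(mod.weight, x) + mod.bias\nprint(y_manual)\nprint(torch.allclose(y, y_manual))",
"_____no_output_____"
]
],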
[
[
"print(mod.weight)\nprint(mod.weight.size())",
"Parameter containing:\ntensor([[ 0.2652, -0.4428, 0.1893, 0.3214, -0.0462],\n [-0.3800, 0.4037, 0.0042, 0.4156, -0.4236],\n [ 0.2434, 0.2813, 0.1570, -0.4028, 0.0869]], requires_grad=True)\ntorch.Size([3, 5])\n"
],
[
"print(mod.bias)\nprint(mod.bias.size())",
"Parameter containing:\ntensor([0.2860, 0.0411, 0.3160], requires_grad=True)\ntorch.Size([3])\n"
]
],
[
[
"### If we want we can change the internal parameters of the module:",
"_____no_output_____"
]
],
[
[
"mod.weight[0,0]=0\nmod.weight[0,1]=1\nmod.weight[0,2]=2\nprint(mod.weight)",
"Parameter containing:\ntensor([[ 0.0000, 1.0000, 2.0000, 0.3214, -0.0462],\n [-0.3800, 0.4037, 0.0042, 0.4156, -0.4236],\n [ 0.2434, 0.2813, 0.1570, -0.4028, 0.0869]], grad_fn=<CopySlices>)\n"
]
],
[
[
"### We can also make a Linear module without bias:",
"_____no_output_____"
]
],
[
[
"mod2 = nn.Linear(5,3,bias=False)\nprint(mod2)",
"Linear(in_features=5, out_features=3, bias=False)\n"
],
[
"print(mod2.weight)",
"Parameter containing:\ntensor([[ 0.1703, 0.1601, 0.3649, -0.1387, -0.3961],\n [ 0.4339, 0.2803, 0.0350, -0.3152, 0.3601],\n [-0.0434, 0.4186, 0.1819, 0.0771, 0.1898]], requires_grad=True)\n"
],
[
"print(mod2.bias)",
"None\n"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c41535b92b3392eb2c4d88c8739348a2e779bf | 15,210 | ipynb | Jupyter Notebook | exercises/classification/5_data_augmentation.ipynb | constantinpape/dl-teaching-resources | 3158d9b17a8bfcb58012d0eb77b45a767d3185de | [
"MIT"
] | 1 | 2022-01-31T15:10:50.000Z | 2022-01-31T15:10:50.000Z | exercises/classification/5_data_augmentation.ipynb | constantinpape/dl-teaching-resources | 3158d9b17a8bfcb58012d0eb77b45a767d3185de | [
"MIT"
] | 1 | 2021-12-18T17:09:51.000Z | 2021-12-18T17:09:51.000Z | exercises/classification/5_data_augmentation.ipynb | constantinpape/dl-teaching-resources | 3158d9b17a8bfcb58012d0eb77b45a767d3185de | [
"MIT"
] | 1 | 2021-11-19T09:30:05.000Z | 2021-11-19T09:30:05.000Z | 29.765166 | 280 | 0.565155 | [
[
[
"<a href=\"https://colab.research.google.com/github/constantinpape/dl-teaching-resources/blob/main/exercises/classification/5_data_augmentation.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Data Augmentation on CIFAR10\n\nIn this exercise we will use data augmentation to increase the available training data and thus improve the network training performance. We will use the same network architecture as in the previous exercise.",
"_____no_output_____"
],
[
"## Preparation",
"_____no_output_____"
]
],
[
[
"# load tensorboard extension\n%load_ext tensorboard",
"_____no_output_____"
],
[
"# import torch and other libraries\nimport os\nimport numpy as np\nimport sklearn.metrics as metrics\nimport matplotlib.pyplot as plt\n\nimport torch\nimport torch.nn as nn\nfrom torch.utils.data import DataLoader\nfrom torch.optim import Adam",
"_____no_output_____"
],
[
"!pip install cifar2png",
"_____no_output_____"
],
[
"# check if we have gpu support\n# colab offers free gpus, however they are not activated by default.\n# to activate the gpu, go to 'Runtime->Change runtime type'. \n# Then select 'GPU' in 'Hardware accelerator' and click 'Save'\nhave_gpu = torch.cuda.is_available()\n# we need to define the device for torch, yadda yadda\nif have_gpu:\n print(\"GPU is available\")\n device = torch.device('cuda')\nelse:\n print(\"GPU is not available, training will run on the CPU\")\n device = torch.device('cpu')",
"_____no_output_____"
],
[
"# run this in google colab to get the utils.py file\n!wget https://raw.githubusercontent.com/constantinpape/training-deep-learning-models-for-vison/master/day1/utils.py ",
"_____no_output_____"
],
[
"# we will reuse the training function, validation function and\n# data preparation from the previous notebook\nimport utils",
"_____no_output_____"
],
[
"cifar_dir = './cifar10'\n!cifar2png cifar10 cifar10",
"_____no_output_____"
],
[
"categories = os.listdir('./cifar10/train')\ncategories.sort()",
"_____no_output_____"
],
[
"images, labels = utils.load_cifar(os.path.join(cifar_dir, 'train'))\n(train_images, train_labels,\n val_images, val_labels) = utils.make_cifar_train_val_split(images, labels)",
"_____no_output_____"
]
],
[
[
"## Data Augmentation\n\nThe goal of data augmentation is to increase the amount of training data by transforming the input images in a way that they still resemble realistic images. Popular transformations used in data augmentation include rotations, image flips, color jitter or additive noise.\nHere, we will start with two transformations:\n- random flips along the vertical centerline\n- random color jitters",
"_____no_output_____"
]
],
[
[
"# define random augmentations\nimport skimage.color as color\n\ndef random_flip(image, target, probability=.5):\n \"\"\" Randomly mirror the image across the vertical axis.\n \"\"\"\n if np.random.rand() < probability:\n image = np.array([np.fliplr(im) for im in image])\n return image, target\n\n\ndef random_color_jitter(image, target, probability=.5):\n \"\"\" Randomly jitter the saturation, hue and brightness of the image.\n \"\"\"\n if np.random.rand() > probability:\n # skimage expects WHC instead of CHW\n image = image.transpose((1, 2, 0))\n # transform image to hsv color space to apply jitter\n image = color.rgb2hsv(image)\n # compute jitter factors in range 0.66 - 1.5 \n jitter_factors = 1.5 * np.random.rand(3)\n jitter_factors = np.clip(jitter_factors, 0.66, 1.5)\n # apply the jitter factors, making sure we stay in correct value range\n image *= jitter_factors\n image = np.clip(image, 0, 1)\n # transform back to rgb and CHW\n image = color.hsv2rgb(image)\n image = image.transpose((2, 0, 1))\n return image, target",
"_____no_output_____"
],
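[
"# An extra augmentation sketch added for illustration (not part of the original\n# exercise): zero-pad the image and take a random crop of the original size.\n# It assumes the same (channel, height, width) numpy layout as the functions above.\ndef random_crop(image, target, padding=4, probability=.5):\n    if np.random.rand() < probability:\n        c, h, w = image.shape\n        padded = np.zeros((c, h + 2 * padding, w + 2 * padding), dtype=image.dtype)\n        padded[:, padding:padding + h, padding:padding + w] = image\n        top = np.random.randint(0, 2 * padding + 1)\n        left = np.random.randint(0, 2 * padding + 1)\n        image = padded[:, top:top + h, left:left + w]\n    return image, target",
"_____no_output_____"
],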
[
"# create training dataset with augmentations\nfrom functools import partial\ntrain_trafos = [\n utils.to_channel_first,\n utils.normalize,\n random_color_jitter,\n random_flip,\n utils.to_tensor\n]\ntrain_trafos = partial(utils.compose, transforms=train_trafos)\n\ntrain_dataset = utils.DatasetWithTransform(train_images, train_labels,\n transform=train_trafos)\n\n# we don't use data augmentations for the validation set\nval_dataset = utils.DatasetWithTransform(val_images, val_labels,\n transform=utils.get_default_cifar_transform())",
"_____no_output_____"
],
[
"# sample augmentations\ndef show_image(ax, image):\n # need to go back to numpy array and WHC axis order\n image = image.numpy().transpose((1, 2, 0))\n ax.imshow(image)\n\nn_samples = 8\nimage_id = 0\nfig, ax = plt.subplots(1, n_samples, figsize=(18, 4))\nfor sample in range(n_samples):\n image, _ = train_dataset[0]\n show_image(ax[sample], image)",
"_____no_output_____"
],
[
"# we reuse the model from the previous exercise\n# if you want you can also use a different CNN architecture that\n# you have designed in the tasks part of that exercise\nmodel = utils.SimpleCNN(10)\nmodel = model.to(device)",
"_____no_output_____"
],
[
"# instantiate loaders and optimizer and start tensorboard\ntrain_loader = DataLoader(train_dataset, batch_size=4, shuffle=True)\nval_loader = DataLoader(val_dataset, batch_size=25)\noptimizer = Adam(model.parameters(), lr=1.e-3)\n%tensorboard --logdir runs",
"_____no_output_____"
],
[
"# we have moved all the boilerplate for the full training procedure to utils now\nn_epochs = 10\nutils.run_cifar_training(model, optimizer,\n train_loader, val_loader,\n device=device, name='da1', \n n_epochs=n_epochs)",
"_____no_output_____"
],
[
"# evaluate the model on test data\ntest_dataset = utils.make_cifar_test_dataset(cifar_dir)\ntest_loader = DataLoader(test_dataset, batch_size=25)\npredictions, labels = utils.validate(model, test_loader, nn.NLLLoss(),\n device, step=0, tb_logger=None)",
"_____no_output_____"
],
[
"print(\"Test accuracy:\")\naccuracy = metrics.accuracy_score(labels, predictions)\nprint(accuracy)\n\nfig, ax = plt.subplots(1, figsize=(8, 8))\nutils.make_confusion_matrix(labels, predictions, categories, ax)",
"_____no_output_____"
]
],
[
[
"## Normalization layers\n\nIn addition to convolutional layers and pooling layers, another important part of neural networks are normalization layers.\n\nThese layers keep their input normalized using a learned normalization. The first type of normalization introduced has been [BatchNorm](https://arxiv.org/abs/1502.03167), which we will now add to the CNN architecture from the previous exercise.",
"_____no_output_____"
]
],
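[
[
"# Added illustration (not part of the original exercise): during training,\n# BatchNorm2d normalizes each channel of a batch to roughly zero mean and unit variance.\nbn = nn.BatchNorm2d(3)\nbn.train()\nxb = torch.randn(8, 3, 32, 32) * 5 + 2\nout = bn(xb)\nprint(xb.mean().item(), xb.std().item())\nprint(out.mean().item(), out.std().item())",
"_____no_output_____"
]
],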
[
[
"import torch.nn.functional as F\n\nclass CNNBatchNorm(nn.Module):\n def __init__(self, n_classes):\n super().__init__()\n self.n_classes = n_classes\n\n # the convolutions\n self.conv1 = nn.Conv2d(in_channels=3, out_channels=12, kernel_size=5)\n self.conv2 = nn.Conv2d(in_channels=12, out_channels=24, kernel_size=3)\n # the pooling layer\n self.pool = nn.MaxPool2d(2, 2)\n # the normalization layers\n self.bn1 = nn.BatchNorm2d(12)\n self.bn2 = nn.BatchNorm2d(24)\n\n # the fully connected part of the network\n # after applying the convolutions and poolings, the tensor\n # has the shape 24 x 6 x 6, see below\n self.fc = nn.Sequential(\n nn.Linear(24 * 6 * 6, 120),\n nn.ReLU(),\n nn.Linear(120, 60),\n nn.ReLU(),\n nn.Linear(60, self.n_classes)\n )\n self.activation = nn.LogSoftmax(dim=1)\n\n def apply_convs(self, x):\n # input image has shape 3 x 32 x 32\n x = self.pool(F.relu(self.bn1(self.conv1(x))))\n # shape after conv: 12 x 28 x 28\n # shape after pooling: 12 x 14 X 14\n x = self.pool(F.relu(self.bn2(self.conv2(x))))\n # shape after conv: 24 x 12 x 12\n # shape after pooling: 24 x 6 x 6\n return x\n \n def forward(self, x):\n x = self.apply_convs(x)\n x = x.view(-1, 24 * 6 * 6)\n x = self.fc(x)\n x = self.activation(x)\n return x",
"_____no_output_____"
],
[
"# instantiate model and optimizer\nmodel = CNNBatchNorm(10)\nmodel = model.to(device)\noptimizer = Adam(model.parameters(), lr=1.e-3)",
"_____no_output_____"
],
[
"n_epochs = 10\nutils.run_cifar_training(model, optimizer,\n train_loader, val_loader,\n device=device, name='batch-norm', \n n_epochs=n_epochs)",
"_____no_output_____"
],
[
"model = utils.load_checkpoin(\"best_checkpoint_batch-norm.tar\", model, optimizer)[0]",
"_____no_output_____"
],
[
"predictions, labels = utils.validate(model, test_loader, nn.NLLLoss(),\n device, step=0, tb_logger=None)\n\nprint(\"Test accuracy:\")\naccuracy = metrics.accuracy_score(labels, predictions)\nprint(accuracy)\n\nfig, ax = plt.subplots(1, figsize=(8, 8))\nutils.make_confusion_matrix(labels, predictions, categories, ax)",
"_____no_output_____"
]
],
[
[
"## Tasks and Questions\n\nTasks:\n- Implement one or two additional augmentations and train the model again using these. You can use [the torchvision transformations](https://pytorch.org/docs/stable/torchvision/transforms.html) for inspiration.\n\nQuestions:\n- Compare the model results in this exercise.\n- Can you think of any transformations that make use of symmetries/invariances not present here but present in other kinds of images (e.g. biomedical images)?\n\nAdvanced:\n- Check out the other [normalization layers available in pytorch](https://pytorch.org/docs/stable/nn.html#normalization-layers). Which layers could be beneficial to BatchNorm here? Try training with them and see if this improves performance further.",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c41b76e1f32ce86858680be8f4fdd4140ad6f6 | 15,980 | ipynb | Jupyter Notebook | 10_pipeline/kubeflow/wip/07_01_KFServing.ipynb | MarcusFra/workshop | 83f16d41f5e10f9c23242066f77a14bb61ac78d7 | [
"Apache-2.0"
] | 2,327 | 2020-03-01T09:47:34.000Z | 2021-11-25T12:38:42.000Z | 10_pipeline/kubeflow/wip/07_01_KFServing.ipynb | MarcusFra/workshop | 83f16d41f5e10f9c23242066f77a14bb61ac78d7 | [
"Apache-2.0"
] | 209 | 2020-03-01T17:14:12.000Z | 2021-11-08T20:35:42.000Z | 10_pipeline/kubeflow/wip/07_01_KFServing.ipynb | MarcusFra/workshop | 83f16d41f5e10f9c23242066f77a14bb61ac78d7 | [
"Apache-2.0"
] | 686 | 2020-03-03T17:24:51.000Z | 2021-11-25T23:39:12.000Z | 28.183422 | 233 | 0.559512 | [
[
[
"# KFServing Sample \n\nIn this notebook, we provide two samples for demonstrating KFServing SDK and YAML versions.\n\n### Setup\n1. Your ~/.kube/config should point to a cluster with [KFServing installed](https://github.com/kubeflow/kfserving/blob/master/docs/DEVELOPER_GUIDE.md#deploy-kfserving).\n2. Your cluster's Istio Ingress gateway must be network accessible, you can do: \n `kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80`. \n\n## 1. KFServing SDK sample\n\nBelow is a sample for KFServing SDK. \n\nIt shows how to use KFServing SDK to create, get, rollout_canary, promote and delete InferenceService.",
"_____no_output_____"
],
[
"### Prerequisites",
"_____no_output_____"
]
],
[
[
"!pip install kfserving kubernetes --user",
"_____no_output_____"
],
[
"from kubernetes import client\n\nfrom kfserving import KFServingClient\nfrom kfserving import constants\nfrom kfserving import utils\nfrom kfserving import V1alpha2EndpointSpec\nfrom kfserving import V1alpha2PredictorSpec\nfrom kfserving import V1alpha2TensorflowSpec\nfrom kfserving import V1alpha2InferenceServiceSpec\nfrom kfserving import V1alpha2InferenceService\nfrom kubernetes.client import V1ResourceRequirements",
"_____no_output_____"
]
],
[
[
"Define namespace where InferenceService needs to be deployed to. If not specified, below function defines namespace to the current one where SDK is running in the cluster, otherwise it will deploy to default namespace.",
"_____no_output_____"
]
],
[
[
"namespace = utils.get_default_target_namespace()",
"_____no_output_____"
]
],
[
[
"### Label namespace so you can run inference tasks in it",
"_____no_output_____"
]
],
[
[
"!kubectl label namespace $namespace serving.kubeflow.org/inferenceservice=enabled",
"_____no_output_____"
]
],
[
[
"### Define InferenceService\nFirstly define default endpoint spec, and then define the inferenceservice basic on the endpoint spec.",
"_____no_output_____"
]
],
[
[
"api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION\ndefault_endpoint_spec = V1alpha2EndpointSpec(\n predictor=V1alpha2PredictorSpec(\n tensorflow=V1alpha2TensorflowSpec(\n storage_uri='gs://kfserving-samples/models/tensorflow/flowers',\n resources=V1ResourceRequirements(\n requests={'cpu':'100m','memory':'1Gi'},\n limits={'cpu':'100m', 'memory':'1Gi'}\n )\n )\n )\n )\n \nisvc = V1alpha2InferenceService(\n api_version=api_version,\n kind=constants.KFSERVING_KIND,\n metadata=client.V1ObjectMeta(name='flower-sample', namespace=namespace),\n spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec)\n )",
"_____no_output_____"
]
],
[
[
"### Create InferenceService\nCall KFServingClient to create InferenceService.",
"_____no_output_____"
]
],
[
[
"KFServing = KFServingClient()\nKFServing.create(isvc)",
"_____no_output_____"
]
],
[
[
"### Check the InferenceService",
"_____no_output_____"
]
],
[
[
"KFServing.get('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)",
"_____no_output_____"
]
],
[
[
"### Invoke Endpoint\n\nIf you want to invoke endpoint by yourself, you can copy and paste below code block and execute in your local environment. Remember you should have a `kfserving-flowers-input.json` file in the same directory when you execute. ",
"_____no_output_____"
]
],
[
[
"%%bash\n\nMODEL_NAME=flower-sample\nINPUT_PATH=@./kfserving-flowers-input.json\nINGRESS_GATEWAY=istio-ingressgateway\nSERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\n\ncurl -v -H \"Host: ${SERVICE_HOSTNAME}\" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH",
"_____no_output_____"
]
],
[
[
"Expected Output\n```\n* Trying 34.83.190.188...\n* TCP_NODELAY set\n* Connected to 34.83.190.188 (34.83.190.188) port 80 (#0)\n> POST /v1/models/flowers-sample:predict HTTP/1.1\n> Host: flowers-sample.default.svc.cluster.local\n> User-Agent: curl/7.60.0\n> Accept: */*\n> Content-Length: 16201\n> Content-Type: application/x-www-form-urlencoded\n> Expect: 100-continue\n> \n< HTTP/1.1 100 Continue\n* We are completely uploaded and fine\n< HTTP/1.1 200 OK\n< content-length: 204\n< content-type: application/json\n< date: Fri, 10 May 2019 23:22:04 GMT\n< server: envoy\n< x-envoy-upstream-service-time: 19162\n< \n{\n \"predictions\": [\n {\n \"scores\": [0.999115, 9.20988e-05, 0.000136786, 0.000337257, 0.000300533, 1.84814e-05],\n \"prediction\": 0,\n \"key\": \" 1\"\n }\n ]\n* Connection #0 to host 34.83.190.188 left intact\n}%\n```",
"_____no_output_____"
],
[
"### Add Canary to InferenceService\nFirstly define canary endpoint spec, and then rollout 10% traffic to the canary version, watch the rollout process.",
"_____no_output_____"
]
],
[
[
"canary_endpoint_spec = V1alpha2EndpointSpec(\n predictor=V1alpha2PredictorSpec(\n tensorflow=V1alpha2TensorflowSpec(\n storage_uri='gs://kfserving-samples/models/tensorflow/flowers-2',\n resources=V1ResourceRequirements(\n requests={'cpu':'100m','memory':'1Gi'},\n limits={'cpu':'100m', 'memory':'1Gi'}\n )\n )\n )\n )\n\nKFServing.rollout_canary('flower-sample', canary=canary_endpoint_spec, percent=10,\n namespace=namespace, watch=True, timeout_seconds=120)",
"_____no_output_____"
]
],
[
[
"### Rollout more traffic to canary of the InferenceService\nRollout traffice percent to 50% to canary version.",
"_____no_output_____"
]
],
[
[
"KFServing.rollout_canary('flower-sample', percent=50, namespace=namespace,\n watch=True, timeout_seconds=120)",
"_____no_output_____"
]
],
[
[
"Users send request to service 100 times.",
"_____no_output_____"
]
],
[
[
"%%bash\n\nMODEL_NAME=flowers-sample\nINPUT_PATH=@./kfserving-flowers-input.json\nINGRESS_GATEWAY=istio-ingressgateway\nSERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\n\nfor i in {0..100};\ndo\n curl -v -H \"Host: ${SERVICE_HOSTNAME}\" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH;\ndone",
"_____no_output_____"
]
],
[
[
"check if traffic is split",
"_____no_output_____"
]
],
[
[
"%%bash\n\ndefault_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flowers-sample-predictor-default -o jsonpath='{.items[0].status.observedGeneration}')\ncanary_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flowers-sample-predictor-canary -o jsonpath='{.items[0].status.observedGeneration}')\n\necho \"\\nThe count of traffic route to default: $default_count\"\necho \"The count of traffic route to canary: $canary_count\"",
"_____no_output_____"
]
],
[
[
"### Promote Canary to Default",
"_____no_output_____"
]
],
[
[
"KFServing.promote('flower-sample', namespace=namespace, watch=True, timeout_seconds=120)",
"_____no_output_____"
]
],
[
[
"### Delete the InferenceService",
"_____no_output_____"
]
],
[
[
"KFServing.delete('flower-sample', namespace=namespace)",
"_____no_output_____"
]
],
[
[
"## 2. Sample for Kfserving YAML\n\nNote: You should execute all the code blocks in your local environment.",
"_____no_output_____"
],
[
"### Create the InferenceService\nApply the CRD",
"_____no_output_____"
]
],
[
[
"!kubectl apply -n $namespace -f kfserving-flowers.yaml ",
"_____no_output_____"
]
],
[
[
"Expected Output\n```\n$ inferenceservice.serving.kubeflow.org/flowers-sample configured\n```",
"_____no_output_____"
],
[
"### Run a prediction\n\nUse `istio-ingressgateway` as your `INGRESS_GATEWAY` if you are deploying KFServing as part of Kubeflow install, and not independently.\n",
"_____no_output_____"
]
],
[
[
"%%bash\n\nMODEL_NAME=flowers-sample\nINPUT_PATH=@./kfserving-flowers-input.json\nINGRESS_GATEWAY=istio-ingressgateway\nSERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\n\ncurl -v -H \"Host: ${SERVICE_HOSTNAME}\" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH",
"_____no_output_____"
]
],
[
[
"If you stop making requests to the application, you should eventually see that your application scales itself back down to zero. Watch the pod until you see that it is `Terminating`. This should take approximately 90 seconds.",
"_____no_output_____"
]
],
[
[
"!kubectl get pods --watch -n $namespace",
"_____no_output_____"
]
],
[
[
"Note: To exit the watch, use `ctrl + c`.",
"_____no_output_____"
],
[
"### Canary Rollout\n\nTo test a canary rollout, you can use the tensorflow-canary.yaml \n",
"_____no_output_____"
],
[
"Apply the CRD",
"_____no_output_____"
]
],
[
[
"!kubectl apply -n $namespace -f kfserving-flowers-canary.yaml ",
"_____no_output_____"
]
],
[
[
"To verify if your traffic split percenage is applied correctly, you can use the following command:",
"_____no_output_____"
]
],
[
[
"!kubectl get inferenceservices -n $namespace",
"_____no_output_____"
]
],
[
[
"The output should looks the similar as below:\n```\nNAME READY URL DEFAULT TRAFFIC CANARY TRAFFIC AGE\nflowers-sample True http://flowers-sample.default.example.com 90 10 48s\n```",
"_____no_output_____"
]
],
[
[
"%%bash\n\nMODEL_NAME=flowers-sample\nINPUT_PATH=@./kfserving-flowers-input.json\nINGRESS_GATEWAY=istio-ingressgateway\nSERVICE_HOSTNAME=$(kubectl get inferenceservice ${MODEL_NAME} -n $namespace -o jsonpath='{.status.url}' | cut -d \"/\" -f 3)\n\nfor i in {0..100};\ndo\n curl -v -H \"Host: ${SERVICE_HOSTNAME}\" http://localhost:8080/v1/models/$MODEL_NAME:predict -d $INPUT_PATH;\ndone",
"_____no_output_____"
]
],
[
[
"Verify if traffic split",
"_____no_output_____"
]
],
[
[
"%%bash\n\ndefault_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flowers-sample-predictor-default -o jsonpath='{.items[0].status.observedGeneration}')\ncanary_count=$(kubectl get replicaset -n $namespace -l serving.knative.dev/configuration=flowers-sample-predictor-canary -o jsonpath='{.items[0].status.observedGeneration}')\n\necho \"\\nThe count of traffic route to default: $default_count\"\necho \"The count of traffic route to canary: $canary_count\"",
"_____no_output_____"
]
],
[
[
"### Clean Up Resources",
"_____no_output_____"
]
],
[
[
"!kubectl delete inferenceservices flowers-sample -n $namespace",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c421e0de753be2695d67b50c1c584ac71bbaac | 278,759 | ipynb | Jupyter Notebook | _notebooks/2022-03-28-mlp-classifier.ipynb | giastantino/repository | 72b8b5a6b0b33ae0aec735c01059da4eb4ef90e3 | [
"Apache-2.0"
] | null | null | null | _notebooks/2022-03-28-mlp-classifier.ipynb | giastantino/repository | 72b8b5a6b0b33ae0aec735c01059da4eb4ef90e3 | [
"Apache-2.0"
] | null | null | null | _notebooks/2022-03-28-mlp-classifier.ipynb | giastantino/repository | 72b8b5a6b0b33ae0aec735c01059da4eb4ef90e3 | [
"Apache-2.0"
] | null | null | null | 156.080067 | 104,077 | 0.858717 | [
[
[
"# \"Folio 03: MLP Classifier\"\n> \"[ML 3/3] Use Neural Networks for Data Classification with Keras\"\n\n- toc: true\n- branch: master\n- badges: true\n- image: images/ipynb/mlp_clf_main.png\n- comments: false\n- author: Giaco Stantino\n- categories: [portfolio project, machine learning]\n- hide: false\n- search_exclude: true\n- permalink: /blog/folio-mlp-classifier",
"_____no_output_____"
],
[
"\n# <center> Intro </center>\n\nI would like to try something new. I want to build something more *explainatory* and *exploratory*... to use my experience and knowledge gained while working on computer vision for my master's thesis. \n\nI think this is the perfect oportunity! In the third part of the ML notebooks for the Folio series, we will build the MLP classifier with special focus on explaining techniques to improve accuracy and regularization. In this notebook I'll try to explain some core neural network features and training with Keras library. \n\n***\n\n**Task:** Classify existing clients into marketing segments based on bank statistics.\n\nThere are four client segments in the data. If you want to know more, check out [the clustering post](https://giacostantino.com/blog/folio-clustering), where the process of creating the segments is described.\n\n| card master | bill payer | golden fish | barrel scraper |\n|:---:|:---:|:---:|:---:|\n| <img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/card_master.png?raw=true\" width=\"200\" height=\"100\"> | <img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/bill_payer.png?raw=true\" width=\"200\" height=\"100\"> | <img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/golden_fish.png?raw=true\" width=\"200\" height=\"100\"> | <img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/barrel_scraper.png?raw=true\" width=\"200\" height=\"100\"> |\n\n***",
"_____no_output_____"
]
],
[
[
"#collapse-hide\nimport pandas as pd\nimport numpy as np\nimport tensorflow as tf\n\n# # # set random seeds\nimport random\nrandom.seed(42)\nnp.random.seed(42)\ntf.random.set_seed(42)\n# # #\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"_____no_output_____"
],
[
"#collapse-hide\ndef getDataRepo(data='demographics'):\n \"\"\"\n returns specified data from Repository as DataFrame\n Parameters:\n path - ['demographics', 'statistics' or 'segments']; default: 'demographics'\n \"\"\"\n try:\n url = 'https://github.com/giastantino/PortfolioProject/blob/main/Notebooks/Data/' + data + '.csv?raw=True'\n client_df = pd.read_csv(url)\n print(data + ' data has been read')\n return client_df\n except Exception as e:\n print(e)",
"_____no_output_____"
],
[
"#collapse-hide\n#hide-output\n# read demographic data\ndemo_df = getDataRepo('statistics')\n\n# read segments data\nsegment_df = getDataRepo('segments')\n\n# create df for clients with known segment\nclient_df = demo_df.merge(segment_df, on='client_id')\n\n# create df for clients with unknown segment\nunknown_df = demo_df[~demo_df['client_id'].isin(segment_df['client_id'])].reset_index(drop=True)",
"statistics data has been read\nsegments data has been read\n"
]
],
[
[
"# Data transformation\n\nWe are going to use power transform as in clustering notebook. \n\nLet's check if anything is missing in imported data.",
"_____no_output_____"
]
],
[
[
"#collapse-hide\nprint(f\"NaN values in the data:\\n-----------------\\n{client_df.isna().sum()}\")",
"NaN values in the data:\n-----------------\nclient_id 0\nattrition_flag 0\nmonths_on_book 0\nproducts_num_held_by_client 0\ninactive_months_in_last_year 0\ncontacts_in_last_year 0\naverage_open_to_buy_credit_line 0\ntotal_transaction_amount_change_q4_q1 0\ntotal_transaction_amount_last_year 0\ntotal_transaction_count_last_year 0\ntotal_trasaction_count_change_q4_q1 0\naverage_card_utilazation_ratio 0\nsegment 0\ndtype: int64\n"
],
[
"#hide\nprint(client_df.info())",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 8223 entries, 0 to 8222\nData columns (total 13 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 client_id 8223 non-null int64 \n 1 attrition_flag 8223 non-null object \n 2 months_on_book 8223 non-null int64 \n 3 products_num_held_by_client 8223 non-null int64 \n 4 inactive_months_in_last_year 8223 non-null int64 \n 5 contacts_in_last_year 8223 non-null int64 \n 6 average_open_to_buy_credit_line 8223 non-null float64\n 7 total_transaction_amount_change_q4_q1 8223 non-null float64\n 8 total_transaction_amount_last_year 8223 non-null float64\n 9 total_transaction_count_last_year 8223 non-null int64 \n 10 total_trasaction_count_change_q4_q1 8223 non-null int64 \n 11 average_card_utilazation_ratio 8223 non-null float64\n 12 segment 8223 non-null object \ndtypes: float64(4), int64(7), object(2)\nmemory usage: 899.4+ KB\nNone\n"
]
],
[
[
"***\n\nDefine data X and targets y for our neural network.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\n# define target y and data X\nX = client_df.drop(['client_id','segment', 'attrition_flag'],axis=1).values\ny = client_df[['segment']]",
"_____no_output_____"
]
],
[
[
"Transform the data and encode the targets. We need to one-hot encode the target classes as neural network will return 'probability' for each class in separate neurons. ",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import PowerTransformer, LabelEncoder\nfrom keras.utils import np_utils\n\n# encode data X\nptr = PowerTransformer()\nX = ptr.fit_transform(X)\n\n# encode target y\nencoder = LabelEncoder()\nencoded_y = encoder.fit_transform(np.ravel(y))\n# convert integers to dummy variables (i.e. one hot encoded)\ndummy_y = np_utils.to_categorical(encoded_y)",
"_____no_output_____"
]
],
[
[
"Split the data into training and testing dataset.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\n# define the 80_20 train_test splits \nX_train, X_test, y_train, y_test = train_test_split(X, dummy_y, test_size=0.2, random_state=42)",
"_____no_output_____"
]
],
[
[
"# <center> Baseline model </center>\n\nIn this notebook we are using Multilayer Perceptron as our classifier. Let's define the base model with 10 neurons in the hidden layer.",
"_____no_output_____"
]
],
[
[
"from keras.models import Sequential\nfrom keras.layers import Dense, Dropout\n\n# define number of input features\ninput_features = X_train.shape[1]\n\n# uncompiled model\ndef uncompiled_base(input_dim=4):\n\t# create model\n model = Sequential()\n model.add(Dense(10, input_dim=input_dim, activation='linear'))\n model.add(Dense(4, activation='softmax'))\n\t\n return model\n\n# baseline model\ndef baseline_model(input_dim):\n model = uncompiled_base(input_dim=input_dim)\n \n # Compile model\n model.compile(loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy'])\n\n return model\n\n# define model\nmlp = baseline_model(input_dim=input_features)\n\n# checkup model\nmlp.summary()",
"Model: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n dense (Dense) (None, 10) 110 \n \n dense_1 (Dense) (None, 4) 44 \n \n=================================================================\nTotal params: 154\nTrainable params: 154\nNon-trainable params: 0\n_________________________________________________________________\n"
],
[
"# train baseline model\nhistory = mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)\nprint(f'BASELINE\\taccuracy: {history.history[\"accuracy\"][-1]*100:.2f} \\t validation accuracy: {history.history[\"val_accuracy\"][-1]*100:.2f}')",
"BASELINE\taccuracy: 96.64 \t validation accuracy: 97.11\n"
]
],
[
[
"The baseline model with linear neuron scores around 97% in training. Let's try to best it with modifications.",
"_____no_output_____"
],
[
"# <center> Activation Function </center>\n\nThe activation function is performed on the sum of all inputs, for the classic perceptron it was a unipolar function, which output was a binary function. However, in modern neural networks, and thus MLP as well, a different approach is commonly used, such as the ReLU or Sigmoid function. \n\n<center><img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_neuron.png?raw=true\" width=\"500\"></center>\n\n***",
"_____no_output_____"
],
[
"Let's consider three activation functions and how they affect our model: [sigmoid](#), [ReLU](#) and [softplus](#)\n\n<center><img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_activationfunction.png?raw=true\" width=\"500\"></center>\n\nSigmoid function squishes any real number into a range between 0 and 1, mathematically 1 / (1 + np.exp(-x)). ReLU is a function that for negative input returns 0 and for postive returns itself. SoftPlus is a smooth approximation to the ReLU function.\n\n\n> Note: In the model we are using softmax fuction in the final layer to make multiclass predictions.",
"_____no_output_____"
]
],
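[
[
"# Added illustration (not in the original notebook): evaluate the three\n# activation functions discussed above with NumPy.\nz = np.linspace(-5, 5, 11)\nsigmoid = 1 / (1 + np.exp(-z))\nrelu = np.maximum(0, z)\nsoftplus = np.log(1 + np.exp(z))\nprint(np.round(sigmoid, 3))\nprint(np.round(relu, 3))\nprint(np.round(softplus, 3))",
"_____no_output_____"
]
],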
[
[
"def get_active_model(input_dim=4, act_fun='relu'):\n\t# create model\n model = Sequential()\n model.add(Dense(10, input_dim=input_dim, activation=act_fun))\n model.add(Dense(4, activation='softmax'))\n\t\n # compile model\n model.compile(loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy'])\n return model\n\n# define mlp with relu activation\nsigm_mlp = get_active_model(input_dim=input_features, act_fun='sigmoid')\n# define mlp with relu activation\nrelu_mlp = get_active_model(input_dim=input_features, act_fun='relu')\n# define mlp with relu activation\nsoft_mlp = get_active_model(input_dim=input_features, act_fun='softmax')",
"_____no_output_____"
],
[
"#collapse-hide\n# train sigmoid model\nsigm_story = sigm_mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)\nprint(f'Sigmoid model \\taccuracy: {sigm_story.history[\"accuracy\"][-1]*100:.2f} \\t validation accuracy: {sigm_story.history[\"val_accuracy\"][-1]*100:.2f}')\n\n# train relu model\nrelu_story = relu_mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)\nprint(f'ReLU model \\taccuracy: {relu_story.history[\"accuracy\"][-1]*100:.2f} \\t validation accuracy: {relu_story.history[\"val_accuracy\"][-1]*100:.2f}')\n\n# train softmax model\nsoft_story = soft_mlp.fit(X_train, y_train, epochs=20, validation_split=0.2, verbose=0)\nprint(f'Softmax model \\taccuracy: {soft_story.history[\"accuracy\"][-1]*100:.2f} \\t validation accuracy: {soft_story.history[\"val_accuracy\"][-1]*100:.2f}')",
"Sigmoid model \taccuracy: 95.97 \t validation accuracy: 95.44\nReLU model \taccuracy: 97.21 \t validation accuracy: 97.19\nSoftmax model \taccuracy: 96.45 \t validation accuracy: 96.20\n"
]
],
[
[
"In our case model with ReLU activation function outperform other MLPs. It has 0.0.6% better accuracy than the baseline model with linear function, which is 17% error reduction! Let's use the ReLU function for further experiments.\n\n> Warning: Sometimes sigmoid or softmax function may outperform ReLu based models when changing the architecture, eg. adding extra layers. Therefore it is important to conduct experiments in methodical fashion.",
"_____no_output_____"
],
[
"# <center> Number of layers and neurons </center>\n\n<center><img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_layers.png?raw=true\" width=\"500\"></center>\n\n\nMLP stands for multi layer perceptron and so far we were using model with input, hidden and output layer. Let's try adding hidden layers and increasing number of neurons.",
"_____no_output_____"
]
],
[
[
"def get_model(input_dim=4, hidden_layers=[], act_fun='relu'):\n\t# create model\n model = Sequential()\n # add initial layer\n model.add(Dense(hidden_layers[0], input_dim=input_dim, activation=act_fun))\n # add hidden layers\n for layer in hidden_layers[1:]:\n model.add(Dense(layer, activation=act_fun))\n # add final layer\n model.add(Dense(4, activation='softmax'))\n\t# compile model\n model.compile(loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"# defince hidden layers to inspect\nhidden = dict()\nhidden[0] = [10]\nhidden[1] = [50]\nhidden[2] = [10, 10]\nhidden[3] = [50, 50]\nhidden[4] = [100, 100]\nhidden[5] = [10, 10, 10]\nhidden[6] = [20, 20, 20]\nhidden[7] = [50, 50, 50]\nhidden[8] = [100, 100, 100]\nhidden[9] = [100, 50, 100]\n\n# get accuracies for models with hidden layers\nfor hid in hidden:\n model = get_model(input_dim=input_features, hidden_layers=hidden[hid])\n story = model.fit(X_train, y_train, epochs=20, batch_size=50, validation_split=0.2, verbose=0)\n print(f'hidden layers: {hidden[hid]} \\taccuracy: {story.history[\"accuracy\"][-1]*100:.2f} \\t validation accuracy: {story.history[\"val_accuracy\"][-1]*100:.2f}')",
"hidden layers: [10] \taccuracy: 96.37 \t validation accuracy: 96.05\nhidden layers: [50] \taccuracy: 98.19 \t validation accuracy: 97.64\nhidden layers: [10, 10] \taccuracy: 96.88 \t validation accuracy: 96.20\nhidden layers: [50, 50] \taccuracy: 99.14 \t validation accuracy: 97.80\nhidden layers: [100, 100] \taccuracy: 99.28 \t validation accuracy: 97.87\nhidden layers: [10, 10, 10] \taccuracy: 97.74 \t validation accuracy: 96.43\nhidden layers: [20, 20, 20] \taccuracy: 98.29 \t validation accuracy: 96.88\nhidden layers: [50, 50, 50] \taccuracy: 99.24 \t validation accuracy: 97.42\nhidden layers: [100, 100, 100] \taccuracy: 99.43 \t validation accuracy: 97.34\nhidden layers: [100, 50, 100] \taccuracy: 99.22 \t validation accuracy: 97.64\n"
]
],
[
[
"We can see that increasing number of layers and neurons has a positive impact on training accuracy, however it is not that obvious for validation subset. We can see that models bigger than `[50, 50]` hidden layers do better in training but not in validation - this suggests that bigger models are overfitting. In other words, bigger models are capable of learning traning data by heart, which is not desirable for test data.\n\nLet's inspect further the best performing model: `[input, 50, 50, output]`",
"_____no_output_____"
],
[
"# <center> Epochs and Batches </center>\n\nWe can increase model performance with more training epochs - number of times MLP has seen traning data. Moreover, we can also manipulate number of data points model sees in each iteration of the epoch - so called batches of data.",
"_____no_output_____"
],
[
"## Batch Size\n\nLet's inspect validation accuracy of chosen model with diffrent batch sizes: online training (batch_size=1), 10, 20, 50, 100.",
"_____no_output_____"
]
],
[
[
"# batch sizes\nbatches = [1, 10, 20, 50, 100]\n# model results history dictionary\nhistory = dict()\n# define model\nmlp = get_model(input_dim=input_features, hidden_layers=[50,50])\n# iterate through batch sizes\nfor b in batches:\n story = mlp.fit(X_train, y_train, validation_split=0.2, epochs=100, batch_size=b, verbose=0)\n history[b] = story.history['val_accuracy']",
"_____no_output_____"
],
[
"#collapse-hide\n# plot validation accuracies\nfig, ax = plt.subplots(figsize=(14, 8))\nline = sns.lineplot(data = pd.DataFrame(history))\n\n# setup the plot\nline.xaxis.grid(True)\nline.set_xlabel('Epoch', fontsize=14, labelpad = 10)\nline.set_ylabel('Accuracy', fontsize=14, labelpad = 10)\nline.set_title('Validation accuracy', fontsize=20, pad=15)\nline.set_xlim(1, 99)\nline.set_ylim(0.955, None);",
"_____no_output_____"
]
],
[
[
"In the above figure we can clearly see that accuracy score for `batch_size = 1` is very irregular and thus not predictible. On the other hand, for large batch sizes score is stable, which suggest that network doesn't learn new features with consequent epochs. Therefore we are going to proceed with `batch_size = 10`, which looks like good middle ground in our case.\n",
"_____no_output_____"
],
[
"## Overfitting vs number of epochs\n\nHaving chosen the architecture and batch size. we shall check how our model behaves in terms of loss (cost function). How many epochs improve model predictions and if the overfitting occurs.",
"_____no_output_____"
]
],
[
[
"# fit the model\nmodel = get_model(input_dim=input_features, hidden_layers=[50,50],)\nhistory = model.fit(X_train, y_train, validation_split=0.2, epochs=150, batch_size=10, verbose=0)",
"_____no_output_____"
],
[
"#collapse-hide\n# define dataframe with model metrics\ndf = pd.DataFrame(history.history, columns=history.history.keys())\n\n# setup figure\nfig, (ax1, ax2) = plt.subplots(figsize=(22, 6), ncols=2)\n\n# plot model accuracy \nfig1 = sns.lineplot(data=df[['accuracy','val_accuracy']], palette='YlGnBu', ax=ax1)\nfig1.set_title('Model accuracy', fontsize=20, pad=15)\nfig1.set_ylabel('accuracy', fontsize=14, labelpad = 10)\nfig1.set_xlabel('epoch', fontsize=14, labelpad = 10)\nfig1.legend(['train', 'valid'], loc='upper left')\n\n# plot model loss function\nfig2 = sns.lineplot(data=df[['loss','val_loss']], palette='YlGnBu', ax=ax2)\nfig2.set_title('Loss function', fontsize=20, pad=15)\nfig2.set_ylabel('loss', fontsize=14, labelpad = 10)\nfig2.set_xlabel('epoch', fontsize=14, labelpad = 10)\nfig2.legend(['train', 'valid'], loc='upper left');",
"_____no_output_____"
]
],
[
[
"In the above graphs, there are plotted model scores for accuracy on the left and for the value of loss function on the right. We can see that model doesn't improve in term's of accuracy after ca. 60 epochs. On the other hand it starts to overfit the data after ca. 40 epochs - the loss function for training and validation spreads out in different directions.\n\nIn order to prevent model from overfitting we will use some regularization techniques.\n",
"_____no_output_____"
],
[
"# Regularization\n\nHaving read this notebook we could relize how complex neural networks are. This makes them more prone to overfitting. Regularization is a technique which makes slight modifications to the learning algorithm such that the model generalizes better. This in turn improves the model performance on the unseen data as well.",
"_____no_output_____"
],
[
"## Dropout\n\nThis is one of the most interesting types of regularization techniques. It also produces very good results and is consequently the most frequently used regularization technique in the field of deep learning.\n\nSo what does dropout do? At every iteration, it randomly selects some nodes and removes them along with all of their incoming and outgoing connections as shown below.\n\n<center><img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_dropout.png?raw=true\" width=\"300\"></center>\n",
"_____no_output_____"
],
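[
"A tiny added illustration (an assumption for demonstration, not part of the original training code) of what a Keras `Dropout` layer does to its inputs:\n```python\nlayer = tf.keras.layers.Dropout(rate=0.5)\ninputs = tf.ones((1, 10))\nprint(layer(inputs, training=True).numpy())   # roughly half the entries are zeroed, the rest scaled by 1/(1-rate)\nprint(layer(inputs, training=False).numpy())  # left unchanged at inference time\n```",
"_____no_output_____"
],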
[
"## Early stopping\n\nEarly stopping is a kind of cross-validation strategy where we keep one part of the training set as the validation set. When we see that the performance on the validation set is getting worse, we immediately stop the training on the model. This is known as early stopping.\n\n<center><img src=\"https://github.com/giastantino/repository/blob/main/images/ipynb/mlp_clf_earlystopping.png?raw=true\" width=\"500\"></center>",
"_____no_output_____"
],
[
"# <center> Summary </center>\n\nLet's sum up everything in final code cell. Firstly, we define the `get_model` function with hidden layer and drop rate. Then, we create callback objects which represent early stopping and saving the checkpoint model - currently the best model based on test accuracy.",
"_____no_output_____"
]
],
[
[
"from keras.callbacks import EarlyStopping\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.models import load_model\nfrom tqdm.keras import TqdmCallback\n\n# get model\ndef get_model(input_dim=4, layers=[], drop_rate=0.2, act_fun='relu'):\n\t# create model\n model = Sequential()\n # add initial layer with dropout\n model.add(Dense(layers[0], input_dim=input_dim, activation=act_fun))\n model.add(Dropout(drop_rate))\n # add hidden layers\n for layer in layers[1:]:\n model.add(Dense(layer, activation=act_fun))\n model.add(Dropout(drop_rate))\n # add final layer \n model.add(Dense(4, activation='softmax'))\n\t# compile the model\n model.compile(loss='categorical_crossentropy', \n optimizer='adam', \n metrics=['accuracy'])\n return model\n\n# early stopping\nes = EarlyStopping(monitor=\"val_loss\",\n patience=20, # how long to wait for loss to decrease\n verbose=0,\n mode=\"min\",\n restore_best_weights=False,\n )\n# model checkpoints\nmc = ModelCheckpoint('best_model.h5',\n monitor='val_accuracy',\n mode='max',\n verbose=0, # set 1 to see when model is saved\n save_best_only=True\n )\n# define model\nmodel = get_model(input_dim=input_features, layers=[50,50], drop_rate=0.2)\n# fit model\nhistory = model.fit(X_train, y_train, \n validation_data=(X_test, y_test), \n epochs=150, \n verbose=0, \n callbacks=[es, mc, TqdmCallback(verbose=1)],\n )\n# load the saved model\nsaved_model = load_model('best_model.h5')\n# evaluate the model\n_, train_acc = saved_model.evaluate(X_train, y_train, verbose=0)\n_, test_acc = saved_model.evaluate(X_test, y_test, verbose=0)\nprint('\\nTrain: %.3f, Test: %.3f' % (train_acc, test_acc))\n\n",
"_____no_output_____"
]
],
[
[
"We can see that the early stopping strategy worked and stopped training. The best model for completed epochs, and thus our final model, has the highest accuracy of all neural networks in this notebook. \n\nLet's plot confusion matrix of the predictions made by the final MLP.",
"_____no_output_____"
]
],
[
[
"#collapse-hide\nfrom sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay\n\ny_pred = saved_model.predict(X_test)\ny_test_class = np.argmax(y_test, axis=1)\ny_pred_class = np.argmax(y_pred, axis=1)\n\nfig, ax = plt.subplots(figsize=(6, 6))\n\nConfusionMatrixDisplay.from_predictions(y_test_class, y_pred_class, cmap='YlGnBu_r', colorbar = False, ax = ax)\nax.set_title('Estimator Confusion Matrix', fontsize=20, pad=20)\nax.set_xlabel('Predicited', fontsize = 14, labelpad=10)\nax.set_ylabel('True', fontsize = 16, labelpad=10);",
"_____no_output_____"
]
],
[
[
"Our neural network has not only high accuracy, but also precision of predictions is very high, as show in the confusion matrix above.",
"_____no_output_____"
],
[
"# Final thoughts\n\nThe goal was to create the MLP classifier. To complete the task we used Keras. We have created a model that is very good at learning data, and thanks to regularization techniques, it generalizes and predicts with hich precision and accuracy on unseen data. ",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
d0c43947c9b9ce200fd0125c3c0fc4e81706ad6a | 28,009 | ipynb | Jupyter Notebook | webscraping and Mongo/mission_to_mars.ipynb | SairaJahangir/mission_to_mars | b00af9f3bbbb554867095423640f97fa1e76921a | [
"MIT"
] | null | null | null | webscraping and Mongo/mission_to_mars.ipynb | SairaJahangir/mission_to_mars | b00af9f3bbbb554867095423640f97fa1e76921a | [
"MIT"
] | null | null | null | webscraping and Mongo/mission_to_mars.ipynb | SairaJahangir/mission_to_mars | b00af9f3bbbb554867095423640f97fa1e76921a | [
"MIT"
] | null | null | null | 35.499366 | 1,051 | 0.465493 | [
[
[
"!pip install splinter\n",
"Requirement already satisfied: splinter in /Users/saira/anaconda3/lib/python3.7/site-packages (0.10.0)\nRequirement already satisfied: selenium>=3.141.0 in /Users/saira/anaconda3/lib/python3.7/site-packages (from splinter) (3.141.0)\nRequirement already satisfied: urllib3 in /Users/saira/anaconda3/lib/python3.7/site-packages (from selenium>=3.141.0->splinter) (1.23)\n"
],
[
"! pip install bs4\n",
"Requirement already satisfied: bs4 in /Users/saira/anaconda3/lib/python3.7/site-packages (0.0.1)\r\nRequirement already satisfied: beautifulsoup4 in /Users/saira/anaconda3/lib/python3.7/site-packages (from bs4) (4.6.3)\r\n"
],
[
"! pip install datetime\n",
"Requirement already satisfied: datetime in /Users/saira/anaconda3/lib/python3.7/site-packages (4.3)\nRequirement already satisfied: pytz in /Users/saira/anaconda3/lib/python3.7/site-packages (from datetime) (2018.5)\nRequirement already satisfied: zope.interface in /Users/saira/anaconda3/lib/python3.7/site-packages (from datetime) (4.5.0)\nRequirement already satisfied: setuptools in /Users/saira/anaconda3/lib/python3.7/site-packages (from zope.interface->datetime) (40.2.0)\n"
],
[
"import pandas as pd\nfrom splinter import Browser\nfrom bs4 import BeautifulSoup as bs\nfrom datetime import datetime\nimport os\nimport time",
"_____no_output_____"
],
[
"! brew cask install chromedriver",
"Updating Homebrew...\n\u001b[34m==>\u001b[0m \u001b[1mAuto-updated Homebrew!\u001b[0m\nUpdated 2 taps (homebrew/cask and homebrew/core).\n\u001b[34m==>\u001b[0m \u001b[1mNew Formulae\u001b[0m\nh3 libopenmpt protoc-gen-go sha3sum tesseract-lang\nlibkeccak mage reprepro spirv-cross\n\u001b[34m==>\u001b[0m \u001b[1mUpdated Formulae\u001b[0m\nabcde jdupes\nabcmidi jenkins\nactivemq-cpp jenkins-lts\naescrypt-packetizer jfrog-cli-go\nafflib juju\nafio kibana\nagedu libgweather\nalgernon libmagic\nallure libpqxx\namqp-cpp libpulsar\nangular-cli librsvg\nannie libtermkey\nansible links\napt-dater liquibase\narangodb logstash\naravis lynis\narm-linux-gnueabihf-binutils macvim\narmadillo mariadb\narpack [email protected]\nartifactory [email protected]\nasciidoctorj [email protected]\nasio mesa\nask-cli [email protected]\nats2-postiats ncompress\nautoconf-archive nexus\nautogen ninja\navfs nkf\nawscli nnn\nazure-storage-cpp node\nbabeld odpi\nbabl offlineimap\nbacula-fd opa\nbalena-cli openconnect\nballerina osquery\nbazel overmind\nbefunge93 pdftoedn\nbettercap petsc\nbgpstream petsc-complex\nbibtexconv pgroonga\nbigloo phoronix-test-suite\nbinaryen [email protected]\nbind phpunit\nbindfs picat\nblastem pixman\nbluetoothconnector pmd\nbmake postgresql\nboxes [email protected]\nbro [email protected]\nbuildifier [email protected]\nbwm-ng presto\ncaddy primesieve\ncassandra profanity\ncheckbashisms psql2csv\ncockroach pushpin\ncocoapods pypy\ncontainer-diff pypy3\nconvox pyside\ncrowdin qt\ndavmail rabbitmq\ndmd rgbds\ndnsviz rke\neiffelstudio rocksdb\nelasticsearch scrcpy\nembulk shellz\neslint ship\nfile-formula simutrans\nfish skaffold\nflake8 solr\nfonttools spatialindex\nfwup squid\ngeocode-glib step\ngerbil-scheme swiftformat\ngit-lfs tesseract\ngit-quick-stats texinfo\ngitlab-runner thors-serializer\ngmsh tmux-xpanes\ngnu-tar topgrade\ngoogle-java-format tor\ngradle-completion typescript\ngroonga umlet\ngx ungit\ngx-go unrar\nhandbrake urbit\nhaproxy vim\nharfbuzz wabt\nhelmfile weaver\nhledger weechat\nhub wtf\ni386-elf-binutils xtensor\nimagemagick yara\nimagemagick@6 ydcv\nipython youtube-dl\n\n\u001b[33mWarning:\u001b[0m Cask 'chromedriver' is already installed.\n\nTo re-install chromedriver, run:\n \u001b[32mbrew cask reinstall chromedriver\u001b[39m\n"
],
[
"# Capture path to Chrome Driver & Initialize browser\nbrowser = Browser(\"chrome\", headless=False)",
"_____no_output_____"
],
[
"# Page to Visit\nurl = \"https://mars.nasa.gov/news/\"\nbrowser.visit(url)",
"_____no_output_____"
],
[
"#using bs to write it into html\nhtml = browser.html\nsoup = bs(html,\"html.parser\")",
"_____no_output_____"
],
[
"news_title = soup.find(\"div\",class_=\"content_title\").text\nnews_p = soup.find(\"div\", class_=\"article_teaser_body\").text\nprint(f\"Title: {news_title}\")\nprint(f\"Para: {news_p}\")",
"Title: After a Reset, Curiosity Is Operating Normally\nPara: NASA's Mars rover Curiosity is in good health but takes a short break while engineers diagnose why it reset its computer. \n"
],
[
"# Mars Image\nurl_image = \"https://www.jpl.nasa.gov/spaceimages/?search=&category=featured#submit\"\nbrowser.visit(url_image)",
"_____no_output_____"
],
[
"from urllib.parse import urlsplit\nbase_url = \"{0.scheme}://{0.netloc}/\".format(urlsplit(url_image))\nprint(base_url)",
"https://www.jpl.nasa.gov/\n"
],
[
"#Design an xpath selector to grab the image\nxpath = \"//*[@id=\\\"page\\\"]/section[3]/div/ul/li[1]/a/div/div[2]/img\"",
"_____no_output_____"
],
[
"#Use splinter to click on the mars featured image\n#to bring the full resolution image\nresults = browser.find_by_xpath(xpath)\nimg = results[0]\nimg.click()",
"_____no_output_____"
],
[
"#get image url using BeautifulSoup\nhtml_image = browser.html\nsoup = bs(html_image, \"html.parser\")\nimg_url = soup.find(\"img\", class_=\"fancybox-image\")[\"src\"]\nfeatured_image_url = base_url + img_url\nprint(featured_image_url)",
"https://www.jpl.nasa.gov//spaceimages/images/largesize/PIA22911_hires.jpg\n"
],
[
"#get mars weather's latest tweet from the website\nurl_weather = \"https://twitter.com/marswxreport?lang=en\"\nbrowser.visit(url_weather)",
"_____no_output_____"
],
[
"html_weather = browser.html\nsoup = bs(html_weather, \"html.parser\")\nmars_weather = soup.find(\"p\", class_=\"TweetTextSize TweetTextSize--normal js-tweet-text tweet-text\").text\nprint(mars_weather)",
"Curiosity is again operating normally following a boot problem first experienced last Friday. Look for more Gale Crater weather conditions soon.\nhttps://www.jpl.nasa.gov/news/news.php?feature=7339 …pic.twitter.com/gFMfXyeWDa\n"
],
[
"# Mars Facts\nurl_facts = \"https://space-facts.com/mars/\"\ntable = pd.read_html(url_facts)\ntable[0]",
"_____no_output_____"
],
[
"df_mars_facts = table[0]\ndf_mars_facts.columns = [\"Parameter\", \"Values\"]\ndf_mars_facts.set_index([\"Parameter\"])",
"_____no_output_____"
],
[
"mars_html_table = df_mars_facts.to_html()\nmars_html_table = mars_html_table.replace(\"\\n\", \"\")\nmars_html_table",
"_____no_output_____"
],
[
"# Mars Hemisphere\nurl_hemisphere = \"https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars\"\nbrowser.visit(url_hemisphere)\n\n#Get base url\nhemisphere_base_url = \"{0.scheme}://{0.netloc}/\".format(urlsplit(url_hemisphere))\nprint(hemisphere_base_url)",
"https://astrogeology.usgs.gov/\n"
],
[
"#1 Hemisphere\nhemisphere_img_urls = []\nresults = browser.find_by_xpath( \"//*[@id='product-section']/div[2]/div[1]/a/img\").click()\ntime.sleep(2)\ncerberus_open_click = browser.find_by_xpath( \"//*[@id='wide-image-toggle']\").click()\ntime.sleep(1)\ncerberus_image = browser.html\nsoup = bs(cerberus_image, \"html.parser\")\ncerberus_url = soup.find(\"img\", class_=\"wide-image\")[\"src\"]\ncerberus_img_url = hemisphere_base_url + cerberus_url\nprint(cerberus_img_url)\ncerberus_title = soup.find(\"h2\",class_=\"title\").text\nprint(cerberus_title)\nback_button = browser.find_by_xpath(\"//*[@id='splashy']/div[1]/div[1]/div[3]/section/a\").click()\ncerberus = {\"image title\":cerberus_title, \"image url\": cerberus_img_url}\nhemisphere_img_urls.append(cerberus)",
"https://astrogeology.usgs.gov//cache/images/cfa62af2557222a02478f1fcd781d445_cerberus_enhanced.tif_full.jpg\nCerberus Hemisphere Enhanced\n"
],
[
"#2 Hemisphere\nresults1 = browser.find_by_xpath( \"//*[@id='product-section']/div[2]/div[2]/a/img\").click()\ntime.sleep(2)\nschiaparelli_open_click = browser.find_by_xpath( \"//*[@id='wide-image-toggle']\").click()\ntime.sleep(1)\nschiaparelli_image = browser.html\nsoup = bs(schiaparelli_image, \"html.parser\")\nschiaparelli_url = soup.find(\"img\", class_=\"wide-image\")[\"src\"]\nschiaparelli_img_url = hemisphere_base_url + schiaparelli_url\nprint(schiaparelli_img_url)\nschiaparelli_title = soup.find(\"h2\",class_=\"title\").text\nprint(schiaparelli_title)\nback_button = browser.find_by_xpath(\"//*[@id='splashy']/div[1]/div[1]/div[3]/section/a\").click()\nschiaparelli = {\"image title\":schiaparelli_title, \"image url\": schiaparelli_img_url}\nhemisphere_img_urls.append(schiaparelli)",
"https://astrogeology.usgs.gov//cache/images/3cdd1cbf5e0813bba925c9030d13b62e_schiaparelli_enhanced.tif_full.jpg\nSchiaparelli Hemisphere Enhanced\n"
],
[
"#3 Hemisphere\nresults1 = browser.find_by_xpath( \"//*[@id='product-section']/div[2]/div[3]/a/img\").click()\ntime.sleep(2)\nsyrtis_major_open_click = browser.find_by_xpath( \"//*[@id='wide-image-toggle']\").click()\ntime.sleep(1)\nsyrtis_major_image = browser.html\nsoup = bs(syrtis_major_image, \"html.parser\")\nsyrtis_major_url = soup.find(\"img\", class_=\"wide-image\")[\"src\"]\nsyrtis_major_img_url = hemisphere_base_url + syrtis_major_url\nprint(syrtis_major_img_url)\nsyrtis_major_title = soup.find(\"h2\",class_=\"title\").text\nprint(syrtis_major_title)\nback_button = browser.find_by_xpath(\"//*[@id='splashy']/div[1]/div[1]/div[3]/section/a\").click()\nsyrtis_major = {\"image title\":syrtis_major_title, \"image url\": syrtis_major_img_url}\nhemisphere_img_urls.append(syrtis_major)",
"https://astrogeology.usgs.gov//cache/images/ae209b4e408bb6c3e67b6af38168cf28_syrtis_major_enhanced.tif_full.jpg\nSyrtis Major Hemisphere Enhanced\n"
],
[
"#4 Hemisphere\nresults1 = browser.find_by_xpath( \"//*[@id='product-section']/div[2]/div[4]/a/img\").click()\ntime.sleep(2)\nvalles_marineris_open_click = browser.find_by_xpath( \"//*[@id='wide-image-toggle']\").click()\ntime.sleep(1)\nvalles_marineris_image = browser.html\nsoup = bs(valles_marineris_image, \"html.parser\")\nvalles_marineris_url = soup.find(\"img\", class_=\"wide-image\")[\"src\"]\nvalles_marineris_img_url = hemisphere_base_url + syrtis_major_url\nprint(valles_marineris_img_url)\nvalles_marineris_title = soup.find(\"h2\",class_=\"title\").text\nprint(valles_marineris_title)\nback_button = browser.find_by_xpath(\"//*[@id='splashy']/div[1]/div[1]/div[3]/section/a\").click()\nvalles_marineris = {\"image title\":valles_marineris_title, \"image url\": valles_marineris_img_url}\nhemisphere_img_urls.append(valles_marineris)\n",
"https://astrogeology.usgs.gov//cache/images/ae209b4e408bb6c3e67b6af38168cf28_syrtis_major_enhanced.tif_full.jpg\nValles Marineris Hemisphere Enhanced\n"
],
[
"hemisphere_img_urls",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c43f1ed60d36cf5aef83854010fb07f0331e61 | 66,670 | ipynb | Jupyter Notebook | resae/resae-13-for-digit-classification.ipynb | noirmist/res_ae | 0ad7ba3e76709e725c552e1848cfbbfdf42e492b | [
"MIT"
] | null | null | null | resae/resae-13-for-digit-classification.ipynb | noirmist/res_ae | 0ad7ba3e76709e725c552e1848cfbbfdf42e492b | [
"MIT"
] | null | null | null | resae/resae-13-for-digit-classification.ipynb | noirmist/res_ae | 0ad7ba3e76709e725c552e1848cfbbfdf42e492b | [
"MIT"
] | null | null | null | 112.618243 | 29,400 | 0.851357 | [
[
[
"# Realize ResAE \n# The decoder part only have the symmetic sturcture as the encoder, but weights and biase are initialized.\n# Let's have a try.",
"_____no_output_____"
],
[
"# Display the result\nimport matplotlib\nmatplotlib.use('Agg')\n%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"import utils\nimport Block",
"_____no_output_____"
],
[
"import os\nimport time\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow.contrib.layers as layers",
"_____no_output_____"
],
[
"# Step1 load MNITST data\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data\", \n one_hot=True,\n validation_size=2000)",
"Extracting MNIST_data/train-images-idx3-ubyte.gz\nExtracting MNIST_data/train-labels-idx1-ubyte.gz\nExtracting MNIST_data/t10k-images-idx3-ubyte.gz\nExtracting MNIST_data/t10k-labels-idx1-ubyte.gz\n"
],
[
"x_in = tf.placeholder(tf.float32, shape=[None,28,28,1],name='inputs')\nx_out = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='outputs')\ncode_length = 128\ncode = tf.placeholder(tf.float32, shape=[None,code_length],name='code')\n\nis_training = tf.placeholder(tf.bool, name='is_training')",
"_____no_output_____"
]
],
[
[
"## Encoder part",
"_____no_output_____"
]
],
[
[
"# Reisudal blocks\nencode_flag = True\nnet = x_in\nblocks_en = [\n [(16, 8, 2)],\n [(32, 16, 2)],\n]\nfor i, block in enumerate(blocks_en):\n block_params = utils.get_block(block, is_training=is_training)\n # build the net\n block_obj = Block.Block(\n inputs = net,\n block_params = block_params,\n is_training = is_training,\n encode_flag=encode_flag,\n scope = 'block'+str(i),\n summary_flag = True\n )\n net = block_obj.get_block()",
"_____no_output_____"
],
[
"# get shape of last block\nencode_last_block_shape = net.get_shape()",
"_____no_output_____"
],
[
"# flatten layer\nwith tf.name_scope('flatten_en'):\n net = layers.flatten(net)\n tf.summary.histogram('flatten_en',net)\nflatten_length = int(net.get_shape()[-1])",
"_____no_output_____"
]
],
[
[
"## Encoder layer",
"_____no_output_____"
]
],
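[
"The value function (equation 2) and weighting function (equation 3) are simple enough to sketch directly. The snippet below is an illustrative stand-alone implementation using the parameter names from the equations; it is not the internal code of the PTAgent class itself.\n\n```python\nimport math\n\ndef v(x, alpha=0.88, beta=0.88, lambda_=2.25):\n    # equation 2: concave over gains, convex and steeper over losses (loss aversion)\n    return x ** alpha if x >= 0 else -lambda_ * (-x) ** beta\n\ndef pi(p, gamma=0.61):\n    # equation 3: overweights small probabilities, underweights moderate and large ones\n    return math.exp(-(-math.log(p)) ** gamma)\n\n# equation 1a applied to the lottery (4000, 0.8; 0, 0.2)\nV = v(4000) * pi(0.8) + v(0) * pi(0.2)\n```",
"_____no_output_____"
],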
[
[
"with tf.name_scope('encoder_layer'):\n net = layers.fully_connected(\n inputs = net,\n num_outputs=code_length,\n activation_fn=tf.nn.relu,\n )\n tf.summary.histogram('encode_layer',net)\n code = net",
"_____no_output_____"
]
],
[
[
"## Decoder block",
"_____no_output_____"
]
],
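[
"As a quick sketch of the difference (passing only the constructor arguments used in this notebook; if the class also exposes beta and delta they would likewise be set to 1 for the normative case):\n\n```python\n# normative, expected-utility-style agent: linear value, identity weighting, no loss aversion\neu_agent = PTAgent(alpha=1, gamma=1, lambda_=1)\n\n# descriptive agent with the Tverski & Kahneman (1992) parameter estimates\npt_agent = PTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)\n```",
"_____no_output_____"
],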
[
[
"encode_last_block_shape[2]",
"_____no_output_____"
],
[
"with tf.name_scope('flatten_de'):\n net = layers.fully_connected(\n inputs = net,\n num_outputs=flatten_length,\n activation_fn=tf.nn.relu,\n )\n tf.summary.histogram('flatten_en', net)",
"_____no_output_____"
],
[
"# flatten to convolve\nwith tf.name_scope('flatten_to_conv'):\n net = tf.reshape(\n net, \n [-1, int(encode_last_block_shape[1]), \n int(encode_last_block_shape[2]), int(encode_last_block_shape[3])])",
"_____no_output_____"
],
[
"net.get_shape()",
"_____no_output_____"
],
[
"# Residual blocks\nblocks_de = [\n [(16, 16, 2)],\n [(1, 8, 2)],]\nfor i, block in enumerate(blocks_de):\n block_params = utils.get_block(block, is_training=is_training)\n # build the net\n block_obj = Block.Block(\n inputs = net,\n block_params = block_params,\n is_training = is_training,\n encode_flag=False,\n scope = 'block'+str(i),\n summary_flag = True\n )\n net = block_obj.get_block()\nx_out = net",
"_____no_output_____"
],
[
"# loss function\nwith tf.name_scope('loss'):\n cost = tf.reduce_mean(tf.square(x_out-x_in))\n tf.summary.scalar('loss', cost)",
"_____no_output_____"
],
[
"# learning rate\nwith tf.name_scope('learning_rate'):\n init_lr = tf.placeholder(tf.float32, name='LR')\n global_step = tf.placeholder(tf.float32, name=\"global_step\")\n decay_step = tf.placeholder(tf.float32, name=\"decay_step\")\n decay_rate = tf.placeholder(tf.float32, name=\"decay_rate\")\n learning_rate = tf.train.exponential_decay(\n learning_rate = init_lr ,\n global_step = global_step,\n decay_steps = decay_step,\n decay_rate = decay_rate,\n staircase=False,\n name=None) ",
"_____no_output_____"
],
[
"def feed_dict(train,batchsize=100,drop=0.5, lr_dict=None):\n \"\"\"Make a TensorFlow feed_dict: maps data onto Tensor placeholders.\"\"\"\n if train:\n xs, _ = mnist.train.next_batch(batchsize)\n f_dict = {x_in: xs.reshape([-1,28,28,1]), \n is_training: True}\n f_dict.update(lr_dict)\n else: \n # validation\n x_val,_ = mnist.validation.images,mnist.validation.labels\n f_dict = {x_in: x_val.reshape([-1,28,28,1]),\n is_training: False}\n return f_dict",
"_____no_output_____"
],
[
"# Train step \n# note: should add update_ops to the train graph\nupdate_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)\nwith tf.control_dependencies(update_ops):\n with tf.name_scope('train'):\n train_step = tf.train.AdamOptimizer(learning_rate).minimize(cost) ",
"_____no_output_____"
],
[
"# Merge all the summaries and write to logdir\nlogdir = './log'\nif not os.path.exists(logdir):\n os.mkdir(logdir)\nmerged = tf.summary.merge_all()\n# Initialize the variables\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\ntrain_writer = tf.summary.FileWriter(logdir + '/train',\n sess.graph)\nval_writer = tf.summary.FileWriter(logdir + '/validation',\n sess.graph)",
"_____no_output_____"
],
[
"# Training the model by repeatedly running train_step\nimport time \nepochs = 100\nbatchsize= 100\nnum_batches = mnist.train.images.shape[0] // batchsize\n# num_batches = 200\n\nlr_init = 0.001\nd_rate = 0.9\n\nx_epoch = np.arange(0,epochs,1)\ny_loss_trn = np.zeros(x_epoch.shape)\ny_loss_val = np.zeros(x_epoch.shape)\n\n# Init all variables\ntimestamp = time.strftime('%Y-%m-%d: %H:%M:%S', time.localtime(time.time()))\nprint(\"[%s]: Epochs Trn_loss Val_loss\" % (timestamp))\nfor i in range(epochs):\n lr_dict = {init_lr: lr_init, global_step:i,\n decay_step: i, decay_step: batchsize,\n decay_rate: d_rate}\n loss_trn_all = 0.0\n for b in range(num_batches):\n train_dict = feed_dict(True,lr_dict=lr_dict)\n # train\n summary_trn, _, loss_trn = sess.run(\n [merged, train_step, cost], \n feed_dict=train_dict)\n loss_trn_all += loss_trn\n \n y_loss_trn[i] = loss_trn_all / num_batches\n train_writer.add_summary(summary_trn, i)\n # validation\n val_dict = feed_dict(False)\n summary_val, y_loss_val[i] = sess.run(\n [merged, cost],feed_dict=val_dict)\n val_writer.add_summary(summary_val, i)\n if i % 10 == 0:\n timestamp = time.strftime('%Y-%m-%d: %H:%M:%S', time.localtime(time.time()))\n print('[%s]: %d %.4f %.4f' % (timestamp, i, \n y_loss_trn[i], y_loss_val[i]))",
"[2018-01-27: 23:53:34]: Epochs Trn_loss Val_loss\n[2018-01-27: 23:53:43]: 0 0.0200 0.0058\n[2018-01-27: 23:55:10]: 10 0.0019 0.0018\n[2018-01-27: 23:56:36]: 20 0.0015 0.0015\n[2018-01-27: 23:58:01]: 30 0.0013 0.0014\n[2018-01-27: 23:59:27]: 40 0.0012 0.0013\n[2018-01-28: 00:00:53]: 50 0.0011 0.0012\n[2018-01-28: 00:02:20]: 60 0.0011 0.0012\n[2018-01-28: 00:03:46]: 70 0.0011 0.0011\n[2018-01-28: 00:05:26]: 80 0.0010 0.0011\n[2018-01-28: 00:07:16]: 90 0.0010 0.0011\n"
],
[
"plt.rcParams[\"figure.figsize\"] = [8.0,6.0]\n\nplt.plot(x_epoch, y_loss_trn)\nplt.plot(x_epoch, y_loss_val)\nplt.legend(['Training loss', 'Validation loss'])\nplt.xlabel('Epochs')\nplt.ylabel('Loss')",
"_____no_output_____"
],
[
"import pickle\ndata_dict = {\n \"x_epoch\": x_epoch,\n \"y_loss_trn\": y_loss_trn,\n \"y_loss_val\": y_loss_val,\n}\nwith open(\"./result_resae13.pkl\", 'wb') as fp:\n pickle.dump(data_dict, fp)",
"_____no_output_____"
],
[
"# test a image\nimg, _ = mnist.validation.next_batch(10)\nimg = img.reshape(-1,28,28,1)",
"_____no_output_____"
],
[
"img_est = sess.run(x_out, feed_dict={x_in: img, is_training: False})",
"_____no_output_____"
],
[
"def gen_norm(img):\n return (img-img.min())/(img.max() - img.min())\n\nn_examples = 10\nfig, axs = plt.subplots(3, n_examples, figsize=(10, 2))\nfor example_i in range(n_examples):\n # raw\n axs[0][example_i].imshow(np.reshape(img[example_i, :], (28, 28)), cmap='gray')\n axs[0][example_i].axis('off')\n # learned\n axs[1][example_i].imshow(np.reshape(img_est[example_i, :], (28, 28)), cmap='gray')\n axs[1][example_i].axis('off')\n # residual\n norm_raw = gen_norm(np.reshape(img[example_i, :], (28, 28)))\n norm_est = gen_norm(np.reshape(img_est[example_i, :],(28, 28)))\n axs[2][example_i].imshow(norm_raw - norm_est, cmap='gray')\n axs[2][example_i].axis('off')\n\nfig.show()\nplt.draw()",
"/home/mzx/.local/lib/python3.5/site-packages/matplotlib/figure.py:403: UserWarning: matplotlib is currently using a non-GUI backend, so cannot show the figure\n \"matplotlib is currently using a non-GUI backend, \"\n"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c440ec4dcfeb38b271c9f1015026b31a799b7c | 5,443 | ipynb | Jupyter Notebook | Activity Recognition.ipynb | mujahid2580/data-sets | 2d7fd52c4126475a6f0dc69aa594eeb45c649e92 | [
"MIT"
] | 1 | 2020-02-19T07:18:13.000Z | 2020-02-19T07:18:13.000Z | Activity Recognition.ipynb | mujahid2580/data-sets | 2d7fd52c4126475a6f0dc69aa594eeb45c649e92 | [
"MIT"
] | null | null | null | Activity Recognition.ipynb | mujahid2580/data-sets | 2d7fd52c4126475a6f0dc69aa594eeb45c649e92 | [
"MIT"
] | null | null | null | 25.553991 | 85 | 0.331251 | [
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"data = pd.read_csv(r\"C:\\Users\\LENOVO\\Desktop\\activity 1.csv\",header=None)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"data.to_csv(\"activity 1.csv\")",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code"
]
] |
d0c44adc0ae9cdb1eebb0c7ad22ceee108738c39 | 4,862 | ipynb | Jupyter Notebook | template.ipynb | vivard/awesome-notebooks | 899558bcc2165bb2155f5ab69ac922c6458e1799 | [
"BSD-3-Clause"
] | 1,114 | 2020-09-28T07:32:23.000Z | 2022-03-31T22:35:50.000Z | template.ipynb | vivard/awesome-notebooks | 899558bcc2165bb2155f5ab69ac922c6458e1799 | [
"BSD-3-Clause"
] | 298 | 2020-10-29T09:39:17.000Z | 2022-03-31T15:24:44.000Z | template.ipynb | vivard/awesome-notebooks | 899558bcc2165bb2155f5ab69ac922c6458e1799 | [
"BSD-3-Clause"
] | 153 | 2020-09-29T06:07:39.000Z | 2022-03-31T17:41:16.000Z | 25.061856 | 800 | 0.604484 | [
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# Tool - Action of the notebook\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/template.ipynb\" target=\"_parent\">\n<img src=\"https://img.shields.io/badge/-Open%20in%20Naas-success?labelColor=000000&logo=data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz4KPHN2ZyB3aWR0aD0iMTAyNHB4IiBoZWlnaHQ9IjEwMjRweCIgdmlld0JveD0iMCAwIDEwMjQgMTAyNCIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB4bWxuczp4bGluaz0iaHR0cDovL3d3dy53My5vcmcvMTk5OS94bGluayIgdmVyc2lvbj0iMS4xIj4KIDwhLS0gR2VuZXJhdGVkIGJ5IFBpeGVsbWF0b3IgUHJvIDIuMC41IC0tPgogPGRlZnM+CiAgPHRleHQgaWQ9InN0cmluZyIgdHJhbnNmb3JtPSJtYXRyaXgoMS4wIDAuMCAwLjAgMS4wIDIyOC4wIDU0LjUpIiBmb250LWZhbWlseT0iQ29tZm9ydGFhLVJlZ3VsYXIsIENvbWZvcnRhYSIgZm9udC1zaXplPSI4MDAiIHRleHQtZGVjb3JhdGlvbj0ibm9uZSIgZmlsbD0iI2ZmZmZmZiIgeD0iMS4xOTk5OTk5OTk5OTk5ODg2IiB5PSI3MDUuMCI+bjwvdGV4dD4KIDwvZGVmcz4KIDx1c2UgaWQ9Im4iIHhsaW5rOmhyZWY9IiNzdHJpbmciLz4KPC9zdmc+Cg==\"/>\n</a>",
"_____no_output_____"
],
[
"**Tags:** #tag1 #tag2",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import library",
"_____no_output_____"
]
],
[
[
"import naas_drivers",
"_____no_output_____"
]
],
[
[
"### Variables",
"_____no_output_____"
]
],
[
[
"stock_name = 'NFLX' ",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Function",
"_____no_output_____"
]
],
[
[
"table = naas_drivers.yahoofinance.get(stock_name, date_from=-3600, date_to=\"today\", moving_averages=[20,50])",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Display result",
"_____no_output_____"
]
],
[
[
"table",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
d0c469e104fc7f32b7ebe2aae3990d9d2555064b | 1,279 | ipynb | Jupyter Notebook | source/tutorials/parallel.ipynb | yuanzunli/kdeLF-test | 8b60ee637088b9394e982c5555347f2d3838ccc2 | [
"MIT"
] | null | null | null | source/tutorials/parallel.ipynb | yuanzunli/kdeLF-test | 8b60ee637088b9394e982c5555347f2d3838ccc2 | [
"MIT"
] | null | null | null | source/tutorials/parallel.ipynb | yuanzunli/kdeLF-test | 8b60ee637088b9394e982c5555347f2d3838ccc2 | [
"MIT"
] | null | null | null | 17.763889 | 57 | 0.502737 | [
[
[
"(parallel)=\n\n# Parallelization",
"_____no_output_____"
]
],
[
[
"%config InlineBackend.figure_format = \"retina\"\n\nfrom matplotlib import rcParams\n\nrcParams[\"savefig.dpi\"] = 100\nrcParams[\"figure.dpi\"] = 100\nrcParams[\"font.size\"] = 20\n\nimport multiprocessing\n\nmultiprocessing.set_start_method(\"fork\")",
"_____no_output_____"
]
],
[
[
"coming soon...",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
d0c46a5a7c81b0ad83047606a63b0f29f2c9a565 | 1,199 | ipynb | Jupyter Notebook | notebooks/exploratory/demos/simple-gpy-demo.ipynb | lschlessinger1/MS-project | e1c02d1d1a7a2480ff6f14f30625dc42ee3417e3 | [
"MIT"
] | 2 | 2019-04-29T15:18:11.000Z | 2019-12-13T18:58:40.000Z | notebooks/exploratory/demos/simple-gpy-demo.ipynb | lschlessinger1/MS-project | e1c02d1d1a7a2480ff6f14f30625dc42ee3417e3 | [
"MIT"
] | 275 | 2019-02-19T22:59:39.000Z | 2020-10-03T08:56:08.000Z | notebooks/exploratory/demos/simple-gpy-demo.ipynb | lschlessinger1/MS-project | e1c02d1d1a7a2480ff6f14f30625dc42ee3417e3 | [
"MIT"
] | null | null | null | 20.322034 | 73 | 0.539616 | [
[
[
"#configure plotting\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib;matplotlib.rcParams['figure.figsize'] = (8,5)\nfrom matplotlib import pyplot as plt\nimport GPy\nimport numpy as np",
"_____no_output_____"
],
[
"k1 = GPy.kern.RBF(1)\nk2 = GPy.kern.RBF(input_dim=1, variance = 78.75, lengthscale=2.)\nk3 = GPy.kern.RBF(1, .5, .5)",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code"
]
] |
d0c46b544f0dfbc4e4853033bf567d7c76ef2a37 | 24,280 | ipynb | Jupyter Notebook | session2/session2.ipynb | jimmysong/pw-exercises | 8d7fc065e9fe01399fa240ff88a7b1557901defb | [
"MIT"
] | 8 | 2019-02-21T04:22:48.000Z | 2020-07-24T11:03:16.000Z | session2/session2.ipynb | jimmysong/pw-exercises | 8d7fc065e9fe01399fa240ff88a7b1557901defb | [
"MIT"
] | null | null | null | session2/session2.ipynb | jimmysong/pw-exercises | 8d7fc065e9fe01399fa240ff88a7b1557901defb | [
"MIT"
] | 2 | 2020-01-23T16:24:16.000Z | 2020-02-10T23:00:29.000Z | 27.622298 | 426 | 0.609349 | [
[
[
"empty"
]
]
] | [
"empty"
] | [
[
"empty"
]
] |
d0c470165e76ae52acbf1a63cdc795c9bf9d758a | 152,952 | ipynb | Jupyter Notebook | 04_Classification_eval_metrics/notebook.ipynb | ziritrion/ml-zoomcamp | 1bd5dd5494b162fb347b445b025a3e088c7c42fe | [
"MIT"
] | 33 | 2021-10-14T02:38:20.000Z | 2022-02-24T19:46:36.000Z | 04_Classification_eval_metrics/notebook.ipynb | ziritrion/ml-zoomcamp | 1bd5dd5494b162fb347b445b025a3e088c7c42fe | [
"MIT"
] | null | null | null | 04_Classification_eval_metrics/notebook.ipynb | ziritrion/ml-zoomcamp | 1bd5dd5494b162fb347b445b025a3e088c7c42fe | [
"MIT"
] | 10 | 2021-11-13T19:54:18.000Z | 2022-02-01T06:44:20.000Z | 86.560272 | 25,830 | 0.839963 | [
[
[
"%autosave 0",
"_____no_output_____"
]
],
[
[
"# 4. Evaluation Metrics for Classification\n\nIn the previous session we trained a model for predicting churn. How do we know if it's good?\n\n\n## 4.1 Evaluation metrics: session overview \n\n* Dataset: https://www.kaggle.com/blastchar/telco-customer-churn\n* https://raw.githubusercontent.com/alexeygrigorev/mlbookcamp-code/master/chapter-03-churn-prediction/WA_Fn-UseC_-Telco-Customer-Churn.csv\n\n\n*Metric* - function that compares the predictions with the actual values and outputs a single number that tells how good the predictions are",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.linear_model import LogisticRegression",
"_____no_output_____"
],
[
"df = pd.read_csv('data-week-3.csv')\n\ndf.columns = df.columns.str.lower().str.replace(' ', '_')\n\ncategorical_columns = list(df.dtypes[df.dtypes == 'object'].index)\n\nfor c in categorical_columns:\n df[c] = df[c].str.lower().str.replace(' ', '_')\n\ndf.totalcharges = pd.to_numeric(df.totalcharges, errors='coerce')\ndf.totalcharges = df.totalcharges.fillna(0)\n\ndf.churn = (df.churn == 'yes').astype(int)",
"_____no_output_____"
],
[
"df_full_train, df_test = train_test_split(df, test_size=0.2, random_state=1)\ndf_train, df_val = train_test_split(df_full_train, test_size=0.25, random_state=1)\n\ndf_train = df_train.reset_index(drop=True)\ndf_val = df_val.reset_index(drop=True)\ndf_test = df_test.reset_index(drop=True)\n\ny_train = df_train.churn.values\ny_val = df_val.churn.values\ny_test = df_test.churn.values\n\ndel df_train['churn']\ndel df_val['churn']\ndel df_test['churn']",
"_____no_output_____"
],
[
"numerical = ['tenure', 'monthlycharges', 'totalcharges']\n\ncategorical = [\n 'gender',\n 'seniorcitizen',\n 'partner',\n 'dependents',\n 'phoneservice',\n 'multiplelines',\n 'internetservice',\n 'onlinesecurity',\n 'onlinebackup',\n 'deviceprotection',\n 'techsupport',\n 'streamingtv',\n 'streamingmovies',\n 'contract',\n 'paperlessbilling',\n 'paymentmethod',\n]",
"_____no_output_____"
],
[
"dv = DictVectorizer(sparse=False)\n\ntrain_dict = df_train[categorical + numerical].to_dict(orient='records')\nX_train = dv.fit_transform(train_dict)\n\nmodel = LogisticRegression()\nmodel.fit(X_train, y_train)",
"_____no_output_____"
],
[
"val_dict = df_val[categorical + numerical].to_dict(orient='records')\nX_val = dv.transform(val_dict)\n\ny_pred = model.predict_proba(X_val)[:, 1]\nchurn_decision = (y_pred >= 0.5)\n(y_val == churn_decision).mean()",
"_____no_output_____"
]
],
[
[
"## 4.2 Accuracy and dummy model\n\n* Evaluate the model on different thresholds\n* Check the accuracy of dummy baselines",
"_____no_output_____"
]
],
[
[
"len(y_val)",
"_____no_output_____"
],
[
"(y_val == churn_decision).mean()",
"_____no_output_____"
],
[
"1132/ 1409",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score",
"_____no_output_____"
],
[
"accuracy_score(y_val, y_pred >= 0.5)",
"_____no_output_____"
],
[
"thresholds = np.linspace(0, 1, 21)\n\nscores = []\n\nfor t in thresholds:\n score = accuracy_score(y_val, y_pred >= t)\n print('%.2f %.3f' % (t, score))\n scores.append(score)",
"0.00 0.274\n0.05 0.509\n0.10 0.591\n0.15 0.666\n0.20 0.710\n0.25 0.739\n0.30 0.760\n0.35 0.772\n0.40 0.785\n0.45 0.793\n0.50 0.803\n0.55 0.801\n0.60 0.795\n0.65 0.786\n0.70 0.765\n0.75 0.744\n0.80 0.735\n0.85 0.726\n0.90 0.726\n0.95 0.726\n1.00 0.726\n"
],
[
"plt.plot(thresholds, scores)",
"_____no_output_____"
],
[
"from collections import Counter",
"_____no_output_____"
],
[
"Counter(y_pred >= 1.0)",
"_____no_output_____"
],
[
"1 - y_val.mean()",
"_____no_output_____"
]
],
[
[
"## 4.3 Confusion table\n\n* Different types of errors and correct decisions\n* Arranging them in a table",
"_____no_output_____"
]
],
[
[
"actual_positive = (y_val == 1)\nactual_negative = (y_val == 0)",
"_____no_output_____"
],
[
"t = 0.5\npredict_positive = (y_pred >= t)\npredict_negative = (y_pred < t)",
"_____no_output_____"
],
[
"tp = (predict_positive & actual_positive).sum()\ntn = (predict_negative & actual_negative).sum()\n\nfp = (predict_positive & actual_negative).sum()\nfn = (predict_negative & actual_positive).sum()",
"_____no_output_____"
],
[
"confusion_matrix = np.array([\n [tn, fp],\n [fn, tp]\n])\nconfusion_matrix",
"_____no_output_____"
],
[
"(confusion_matrix / confusion_matrix.sum()).round(2)",
"_____no_output_____"
]
],
[
[
"## 4.4 Precision and Recall",
"_____no_output_____"
]
],
[
[
"p = tp / (tp + fp)\np",
"_____no_output_____"
],
[
"r = tp / (tp + fn)\nr",
"_____no_output_____"
]
],
[
[
"## 4.5 ROC Curves\n\n### TPR and FRP",
"_____no_output_____"
]
],
[
[
"tpr = tp / (tp + fn)\ntpr",
"_____no_output_____"
],
[
"fpr = fp / (fp + tn)\nfpr",
"_____no_output_____"
],
[
"scores = []\n\nthresholds = np.linspace(0, 1, 101)\n\nfor t in thresholds:\n actual_positive = (y_val == 1)\n actual_negative = (y_val == 0)\n \n predict_positive = (y_pred >= t)\n predict_negative = (y_pred < t)\n\n tp = (predict_positive & actual_positive).sum()\n tn = (predict_negative & actual_negative).sum()\n\n fp = (predict_positive & actual_negative).sum()\n fn = (predict_negative & actual_positive).sum()\n \n scores.append((t, tp, fp, fn, tn))",
"_____no_output_____"
],
[
"columns = ['threshold', 'tp', 'fp', 'fn', 'tn']\ndf_scores = pd.DataFrame(scores, columns=columns)\n\ndf_scores['tpr'] = df_scores.tp / (df_scores.tp + df_scores.fn)\ndf_scores['fpr'] = df_scores.fp / (df_scores.fp + df_scores.tn)",
"_____no_output_____"
],
[
"plt.plot(df_scores.threshold, df_scores['tpr'], label='TPR')\nplt.plot(df_scores.threshold, df_scores['fpr'], label='FPR')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Random model",
"_____no_output_____"
]
],
[
[
"np.random.seed(1)\ny_rand = np.random.uniform(0, 1, size=len(y_val))",
"_____no_output_____"
],
[
"((y_rand >= 0.5) == y_val).mean()",
"_____no_output_____"
],
[
"def tpr_fpr_dataframe(y_val, y_pred):\n scores = []\n\n thresholds = np.linspace(0, 1, 101)\n\n for t in thresholds:\n actual_positive = (y_val == 1)\n actual_negative = (y_val == 0)\n\n predict_positive = (y_pred >= t)\n predict_negative = (y_pred < t)\n\n tp = (predict_positive & actual_positive).sum()\n tn = (predict_negative & actual_negative).sum()\n\n fp = (predict_positive & actual_negative).sum()\n fn = (predict_negative & actual_positive).sum()\n\n scores.append((t, tp, fp, fn, tn))\n\n columns = ['threshold', 'tp', 'fp', 'fn', 'tn']\n df_scores = pd.DataFrame(scores, columns=columns)\n\n df_scores['tpr'] = df_scores.tp / (df_scores.tp + df_scores.fn)\n df_scores['fpr'] = df_scores.fp / (df_scores.fp + df_scores.tn)\n \n return df_scores",
"_____no_output_____"
],
[
"df_rand = tpr_fpr_dataframe(y_val, y_rand)",
"_____no_output_____"
],
[
"plt.plot(df_rand.threshold, df_rand['tpr'], label='TPR')\nplt.plot(df_rand.threshold, df_rand['fpr'], label='FPR')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Ideal model",
"_____no_output_____"
]
],
[
[
"num_neg = (y_val == 0).sum()\nnum_pos = (y_val == 1).sum()\nnum_neg, num_pos",
"_____no_output_____"
],
[
"\ny_ideal = np.repeat([0, 1], [num_neg, num_pos])\ny_ideal\n\ny_ideal_pred = np.linspace(0, 1, len(y_val))",
"_____no_output_____"
],
[
"1 - y_val.mean()",
"_____no_output_____"
],
[
"accuracy_score(y_ideal, y_ideal_pred >= 0.726)",
"_____no_output_____"
],
[
"df_ideal = tpr_fpr_dataframe(y_ideal, y_ideal_pred)\ndf_ideal[::10]",
"_____no_output_____"
],
[
"plt.plot(df_ideal.threshold, df_ideal['tpr'], label='TPR')\nplt.plot(df_ideal.threshold, df_ideal['fpr'], label='FPR')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Putting everything together",
"_____no_output_____"
]
],
[
[
"plt.plot(df_scores.threshold, df_scores['tpr'], label='TPR', color='black')\nplt.plot(df_scores.threshold, df_scores['fpr'], label='FPR', color='blue')\n\nplt.plot(df_ideal.threshold, df_ideal['tpr'], label='TPR ideal')\nplt.plot(df_ideal.threshold, df_ideal['fpr'], label='FPR ideal')\n\n# plt.plot(df_rand.threshold, df_rand['tpr'], label='TPR random', color='grey')\n# plt.plot(df_rand.threshold, df_rand['fpr'], label='FPR random', color='grey')\n\nplt.legend()",
"_____no_output_____"
],
[
"plt.figure(figsize=(5, 5))\n\nplt.plot(df_scores.fpr, df_scores.tpr, label='Model')\nplt.plot([0, 1], [0, 1], label='Random', linestyle='--')\n\nplt.xlabel('FPR')\nplt.ylabel('TPR')\n\nplt.legend()",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_curve",
"_____no_output_____"
],
[
"fpr, tpr, thresholds = roc_curve(y_val, y_pred)",
"_____no_output_____"
],
[
"plt.figure(figsize=(5, 5))\n\nplt.plot(fpr, tpr, label='Model')\nplt.plot([0, 1], [0, 1], label='Random', linestyle='--')\n\nplt.xlabel('FPR')\nplt.ylabel('TPR')\n\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## 4.6 ROC AUC\n\n* Area under the ROC curve - useful metric\n* Interpretation of AUC",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import auc",
"_____no_output_____"
],
[
"auc(fpr, tpr)",
"_____no_output_____"
],
[
"auc(df_scores.fpr, df_scores.tpr)",
"_____no_output_____"
],
[
"auc(df_ideal.fpr, df_ideal.tpr)",
"_____no_output_____"
],
[
"fpr, tpr, thresholds = roc_curve(y_val, y_pred)\nauc(fpr, tpr)",
"_____no_output_____"
],
[
"from sklearn.metrics import roc_auc_score",
"_____no_output_____"
],
[
"roc_auc_score(y_val, y_pred)",
"_____no_output_____"
],
[
"neg = y_pred[y_val == 0]\npos = y_pred[y_val == 1]",
"_____no_output_____"
],
[
"import random",
"_____no_output_____"
],
[
"n = 100000\nsuccess = 0 \n\nfor i in range(n):\n pos_ind = random.randint(0, len(pos) - 1)\n neg_ind = random.randint(0, len(neg) - 1)\n\n if pos[pos_ind] > neg[neg_ind]:\n success = success + 1\n\nsuccess / n",
"_____no_output_____"
],
[
"n = 50000\n\nnp.random.seed(1)\npos_ind = np.random.randint(0, len(pos), size=n)\nneg_ind = np.random.randint(0, len(neg), size=n)\n\n(pos[pos_ind] > neg[neg_ind]).mean()",
"_____no_output_____"
]
],
[
[
"## 4.7 Cross-Validation\n\n* Evaluating the same model on different subsets of data\n* Getting the average prediction and the spread within predictions",
"_____no_output_____"
]
],
[
[
"def train(df_train, y_train, C=1.0):\n dicts = df_train[categorical + numerical].to_dict(orient='records')\n\n dv = DictVectorizer(sparse=False)\n X_train = dv.fit_transform(dicts)\n\n model = LogisticRegression(C=C, max_iter=1000)\n model.fit(X_train, y_train)\n \n return dv, model",
"_____no_output_____"
],
[
"dv, model = train(df_train, y_train, C=0.001)",
"_____no_output_____"
],
[
"def predict(df, dv, model):\n dicts = df[categorical + numerical].to_dict(orient='records')\n\n X = dv.transform(dicts)\n y_pred = model.predict_proba(X)[:, 1]\n\n return y_pred",
"_____no_output_____"
],
[
"y_pred = predict(df_val, dv, model)",
"_____no_output_____"
],
[
"from sklearn.model_selection import KFold",
"_____no_output_____"
],
[
"!pip install tqdm",
"Requirement already satisfied: tqdm in /Users/ziri/anaconda3/lib/python3.7/site-packages (4.62.3)\n"
],
[
"from tqdm.auto import tqdm",
"_____no_output_____"
],
[
"n_splits = 5\n\n# C = regularization parameter for the model\n# tqdm() is a function that prints progress bars\nfor C in tqdm([0.001, 0.01, 0.1, 0.5, 1, 5, 10]):\n kfold = KFold(n_splits=n_splits, shuffle=True, random_state=1)\n\n scores = []\n\n for train_idx, val_idx in kfold.split(df_full_train):\n df_train = df_full_train.iloc[train_idx]\n df_val = df_full_train.iloc[val_idx]\n\n y_train = df_train.churn.values\n y_val = df_val.churn.values\n\n dv, model = train(df_train, y_train, C=C)\n y_pred = predict(df_val, dv, model)\n\n auc = roc_auc_score(y_val, y_pred)\n scores.append(auc)\n\n print('C=%s %.3f +- %.3f' % (C, np.mean(scores), np.std(scores)))",
" 14%|█▍ | 1/7 [00:01<00:07, 1.18s/it]"
],
[
"scores",
"_____no_output_____"
],
[
"dv, model = train(df_full_train, df_full_train.churn.values, C=1.0)\ny_pred = predict(df_test, dv, model)\n\nauc = roc_auc_score(y_test, y_pred)\nauc",
"_____no_output_____"
]
],
[
[
"## 4.8 Summary\n\n* Metric - a single number that describes the performance of a model\n* Accuracy - fraction of correct answers; sometimes misleading \n* Precision and recall are less misleading when we have class inbalance\n* ROC Curve - a way to evaluate the performance at all thresholds; okay to use with imbalance\n* K-Fold CV - more reliable estimate for performance (mean + std)",
"_____no_output_____"
],
[
"## 4.9 Explore more\n\n* Check the precision and recall of the dummy classifier that always predict \"FALSE\"\n* F1 score = 2 * P * R / (P + R)\n* Evaluate precision and recall at different thresholds, plot P vs R - this way you'll get the precision/recall curve (similar to ROC curve)\n* Area under the PR curve is also a useful metric\n\nOther projects:\n\n* Calculate the metrics for datasets from the previous week",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c495ed71bbfa3d706c76813bb4e1158ba1f61f | 10,499 | ipynb | Jupyter Notebook | notebooks/Milestone3.ipynb | UBC-MDS/525-group27 | adcd2dae51624e43d0286402077372e7014b461e | [
"MIT"
] | null | null | null | notebooks/Milestone3.ipynb | UBC-MDS/525-group27 | adcd2dae51624e43d0286402077372e7014b461e | [
"MIT"
] | 6 | 2022-03-29T21:49:01.000Z | 2022-03-29T21:55:18.000Z | notebooks/Milestone3.ipynb | UBC-MDS/525-group27 | adcd2dae51624e43d0286402077372e7014b461e | [
"MIT"
] | null | null | null | 42.678862 | 457 | 0.643204 | [
[
[
"# DSCI 525 - Web and Cloud Computing\n\n***Milestone 3:*** This milestone aims to set up your spark cluster and develop your machine learning to deploy in the cloud for the next milestone.\n\n## Milestone 3 checklist :\n- [ ] Setup your EMR cluster with Spark, Hadoop, JupyterEnterpriseGateway, JupyterHub 1.1.0, and Livy. \n- [ ] Make sure you set up foxy proxy for your web browser(Firefox). Probably you already set this up from the previous milestone.\n- [ ] Develop a ML model using scikit-learn. (We will be using this model to deploy for our next milestone.)\n- [ ] Obtain the best hyperparameter settings using spark's MLlib.\n\n**Keep in mind:**\n\n- _Please use the Firefox browser for this milestone. Make sure you got foxy proxy setup._\n\n- _All services you use are in region us-west-2 region._\n\n- _Use only default VPC and subnet, if not specified explicitly in instruction, leave all other options default when setting up your cluster._\n \n- _No IP addresses are visible when you provide the screenshot (***Please mask it before uploading***)._\n\n- _1 node cluster with a single master node (zero slave nodes) of size ```m5.xlarge``` is good enough for your spark MLlib process. These configurations might take 15 - 20 minutes to get optimal tuning parameters for the entire dataset._\n\n- _Say something went wrong and you want to spin up another EMR cluster, then make sure you terminate the previous one._\n\n- _Upon termination, stored data in your cluster will be lost. Make sure you save any data to S3 and download the notebooks to your laptop so that next time you have your jupyterHub in a different cluster, you can upload your notebook there._\n\n_***Outside of Milestone [OPTIONAL]:*** You are encouraged to practice it yourself by spinning up EMR clusters._\n\n***VERY IMPORTANT:*** With task 4, make sure you occasionally download the notebook to your local computer. Once the lab is stopped after 3 hours, your EMR cluster will be terminated, and everything will vanish.",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
],
[
"### 1. Setup your EMR cluster\nrubric={correctness:25}",
"_____no_output_____"
],
[
"Follow the instructions shown during the lecture to set up your EMR cluster. I am adding instructions here again for guidance.\n\n1.1) Go to advanced options.\n\n1.2) Choose Release 6.5.0.\n\n1.3) Check Spark, Hadoop, JupyterEnterpriseGateway, JupyterHub 1.1.0, and Livy. \n\n1.4) Core instances to be 0, master 1.\n \n1.5) By default, the instance will be selected as m5.xlarge. However, you can also choose a bigger instance (e.g., m4.4xlarge, but make sure you budget )\n\n1.6) Cluster name : Your-group-number.\n \n1.7) Uncheck Enable auto-termination.\n \n1.8) Select the key pair you have access to (from your milestone 2).\n\n1.9) EC2 security group, please go with the default. Remember, this is a managed service; what we learned from the shared responsibility model so that AWS will take care of many things. EMR comes in the list of container services. Check [this]( https://aws.amazon.com/blogs/industries/applying-the-aws-shared-responsibility-model-to-your-gxp-solution/).\n \n1.10) Wait for the cluster to start. This takes around ~15 min. Once it is ready, you will see a solid green dot. ",
"_____no_output_____"
],
[
"#### Please attach this screen shots from your group for grading\n\n\nhttps://github.com/UBC-MDS/525-group27/blob/main/notebooks/image/m3_1.png",
"_____no_output_____"
],
[
"### 2. Setup your browser , jupyter environment & connect to the master node.\nrubric={correctness:25}",
"_____no_output_____"
],
[
"2.1) Under cluster ```summary > Application user interfaces > On-cluster user interfaces```: Click on _***Enable an SSH Connection***_.\n\n2.2) From instructions in the popup from Step 2.1, use: **Step 1: Open an SSH Tunnel to the Amazon EMR Master Node.** Remember you are running this from your laptop terminal, and after running, it will look like [this](https://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone3/images/eg.png). For the private key make sure you point to the correct location in your computer.\n\n2.3) (If you haven't done so from milestone 2) From instructions in the popup from Step 2.1, please ignore **Step 2: Configure a proxy management tool**. Instead follow instructions given [here](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node-proxy.html), under section **Example: Configure FoxyProxy for Firefox:**. Get foxy proxy standard [here](https://addons.mozilla.org/en-CA/firefox/addon/foxyproxy-standard/) \n\n2.4) Move to **application user interfaces** tab, use the jupytetHub URL to access it.\n\n2.4.1) Username: ```jovyan```, Password: ```jupyter```. These are default more details [here](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-user-access.html)\n\n2.5) Login into the master node from your laptop terminal (```cluster summary > Connect to the Master Node Using SSH```), and install the necessary packages. Here are the needed packages based on my solution; you might have to install other packages depending on your approach.\n\n sudo yum install python3-devel\n sudo pip3 install pandas\n sudo pip3 install s3fs\n\n**IMPORTANT:** \n- Make sure ssh -i ~/ggeorgeAD.pem -ND 8157 [email protected] (Step 2.2) is running in your terminal window before trying to access your jupyter URL. Sometimes the connection might lose; in that case, run that step again to access your jupyterHub.\n- Don't confuse Step 2.2 and Step 2.5. In 2.2, you open an ssh tunnel to access the jupyterHub URL. With Step 2.6, you log into the master node to install the necessary packages.",
"_____no_output_____"
],
[
"#### Please attach this screen shots from your group for grading\n\nhttps://github.com/UBC-MDS/525-group27/blob/main/notebooks/image/m3_2.png",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
],
[
"### 3. Develop a ML model using scikit-learn.\nrubric={correctness:25}",
"_____no_output_____"
],
[
"You can either use the setup that we have from our last milestone. But it might have been shut down by AWS due to the time limit; also, we haven't got permission from AWS to spin up instances larger than t2.large. Considering the situation, I recommend doing this on your local computer. So upload this notebook to your local jupyter notebook and follow the instructions.\nhttps://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone3/Milestone3-Task3.ipynb\n\nThere are 2 parts to this notebook; For doing part 2, you want information from Task 4.",
"_____no_output_____"
],
[
"#### Please attach this screen shots from your group for grading\n\nhttps://github.com/UBC-MDS/525-group27/blob/main/notebooks/image/m3_3.png",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
],
[
"### 4. Obtain best hyperparameter settings using spark's MLlib.\nrubric={correctness:20}",
"_____no_output_____"
],
[
"Upload this notebook to your jupyterHub (AWS managed jupyterHub in the cluster) you set up in Task 2 and follow the instructions given in the notebook.\nhttps://github.ubc.ca/mds-2021-22/DSCI_525_web-cloud-comp_students/blob/master/release/milestone3/Milestone3-Task4.ipynb",
"_____no_output_____"
],
[
"### 5. Submission instructions\nrubric={mechanics:5}\n\n***SUBMISSION:*** Please put a link to your GitHub folder in the canvas where TAs can find the following-\n- [ ] Python 3 notebook, with the code for ML model in scikit-learn. (You can develop this on your local computer)\n- [ ] PySpark notebook, with the code for obtaining the best hyperparameter settings. ( For this, you have to use PySpark notebook(kernal) in your EMR cluster )\n- [ ] Screenshot from \n - [ ] Setup your EMR cluster (Task 1).\n - [ ] Setup your browser, jupyter environment & connect to the master node (Task 2). \n - [ ] Your S3 bucket showing ```model.joblib``` file. (From Task 3 Develop a ML model using scikit-learn)",
"_____no_output_____"
]
]
] | [
"markdown"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
d0c4a69a8845a8ea910dad591cb8a7363d3077a8 | 23,159 | ipynb | Jupyter Notebook | Prospect_Theory_Agent_Demo.ipynb | cognitionswitch/decisionscience | ef6e3363dc87b682853c7e23be32d9224ee366b6 | [
"MIT"
] | null | null | null | Prospect_Theory_Agent_Demo.ipynb | cognitionswitch/decisionscience | ef6e3363dc87b682853c7e23be32d9224ee366b6 | [
"MIT"
] | null | null | null | Prospect_Theory_Agent_Demo.ipynb | cognitionswitch/decisionscience | ef6e3363dc87b682853c7e23be32d9224ee366b6 | [
"MIT"
] | 1 | 2022-02-07T09:43:33.000Z | 2022-02-07T09:43:33.000Z | 33.515195 | 578 | 0.577227 | [
[
[
"# Prospect Theory and Cumulative Prospect Theory Agent Demo",
"_____no_output_____"
],
[
"The PTAgent and CPTAgent classes reproduce patterns of choice behavior described by Kahneman & Tverski's survey data in their seminal papers on Prospect Theory and Cumulative Prospect Theory. These classes expresses valuations of single lottery inputs, or express preferences between two lottery inputs. To more explicitly describe these agent classes, we define the following:\n\n1. $(x_1, p_1; \\cdots; x_n, p_n)$: a lottery offering outcome $x_1$ with probability $p_1$, ..., outcome $x_n$ with probability $p_n$. \n2. $v(x)$: the internal representation of the value of an outcome $x$ to an instance of a PTAgent.\n3. $\\pi(p)$: the internal representation of a probability $p$ to an instance of a PTAgent. \n4. $V(x_1, p_1; \\cdots; x_n, p_n)$: a lottery valuation function.",
"_____no_output_____"
],
[
"#### **Prospect Theory Agent**\n\nThe PTAgent class reflects the lottery valuation function of Prospect Theory described in Kahneman & Tverski (1979). Generally, the lottery valuation function operates as follows: \n\n$$V(x_1, p_1; \\dots; x_n, p_n) = v(x_1) \\times \\pi(p_1) + \\cdots + v(x_n) \\times \\pi(p_n) \\tag{1a}$$\n\nHowever, under certain conditions the lottery valuation function is operates under a different formulation. These conditions are:\n\n1. When the lottery contains exactly two non-zero outcomes and one zero outcome relative to a reference point, with each of these outcomes occuring with non-zero probability; ie., $p_1 + p_2 + p_3 = 1$ for $x_1, x_2 \\in \\lbrace x | x \\ne 0 \\rbrace$ and $x_3=0$.\n2. When the outcomes are both positive relative to a reference point or both negative relative to a reference point. Explicitly, $x_2 < x_1 < 0$ or $x_2 > x_1 > 0$.\n\nWhen a lottery satisfies the conditions above, the lottery valuation function becomes:\n\n$$V(x_1, p_1; x_2, p_2) = x_1 + p_2(x_2 - x_1) \\tag{1b}$$\n\nSince the original account of prospect theory does not explicitly describe the value function or weighting function, the value function uses the same function proposed in Tverski & Kahneman (1992):\n\n $$v(x) = \\begin{equation}\n\\left\\{ \n \\begin{aligned}\n x^\\alpha& \\;\\; \\text{if} \\, x \\ge 0\\\\\n -\\lambda (-x)^\\beta& \\;\\; \\text{if} \\, x \\lt 0\\\\\n \\end{aligned}\n \\right.\n\\end{equation} \\tag{2}$$\n\nWhile the weighting function uses a form described here: https://sites.duke.edu/econ206_01_s2011/files/2011/04/39b-Prospect-Theory-Kahnemann-Tversky_final2-1.pdf.\n\n$$\\pi(p) = exp(-(-ln(p))^\\gamma) \\tag{3}$$",
"_____no_output_____"
],
[
"#### **Cumulative Prospect Theory Agent**\n\nThe CPTAgent class reflects the lottery valuation function, value function, and weighting function described in Tverski & Kahneman (1992). The CPTAgent class also incorporates capacities as described in this same paper. For Cumulative Prospect Theory, outcomes and associated probabilities include the attribute of *valence* that reflects whether the realization of an outcome would increases or decreases value from a reference point of the agent. \n\nThe value function for positive and negative outcomes is shown in equation 2 above.\n\nFor probabilities $p$ associated with positive valence outcomes, the *capacity* function is expressed as:\n$$w^{+}(p) = \\frac{p^\\gamma}{\\left(p^\\gamma+(1-p)^\\gamma) \\right)^{1/ \\gamma}} \\tag{4a}$$\n\nFor probabilities $p$ associated with negative valence outcomes, the capacity function is expressed similarly as:\n$$w^{-}(p) = \\frac{p^\\delta}{\\left(p^\\delta+(1-p)^\\delta) \\right)^{1/ \\delta}} \\tag{4b}$$\n\nIn order to compute a weight for the $i^{th}$ outcome with positive valence, a difference of cumulative sums is computed as follows:\n\n$$\\pi^{+}(p_i) = w^{+}(p_i + \\cdots + p_n) - w^{+}(p_{i+1} + \\cdots + p_n), \\; 0 \\le x_i < \\cdots < x_n \\tag{5a}$$\n\nSimilarly, computing a weight for the $j^{th}$ outcome with negative valence:\n\n$$\\pi^{-}(p_j) = w^{-}(p_j + \\cdots + p_m) - w^{-}(p_{j+1} + \\cdots + p_m), \\; 0 \\gt x_j > \\cdots > x_m \\tag{5b}$$\n\nLottery valuations for Cumulative Prospect Theory are then computed in a similar manner as Prospect Theory (equation 1a). ",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"## Choice Behavior for Lotteries\n\n#### **Normative Choice Behavior**\n\nSpecification of the following parameters leads to an agent that chooses lotteries according to Expected Utility Theory:\n- $\\alpha = \\beta = 1$\n- $\\gamma = \\delta = 1$\n- $\\lambda = 1$\n\n#### **Descriptive Choice Behavior**\n\nWhen $\\alpha, \\beta, \\gamma, \\delta$ take values on the interval $(0, 1)$, and when $\\lambda > 1$, lottery valuation functions with constituent value and weighting functions show patterns of choice that better approximate empirical choice behavior than those predicted by normative choice behavior.\n\n#### **Notation**\n\nTo illustrate functionality of the PTAgent and CPTAgent classes, we denote an outcome and its associated probability as a tuple $(G_1, p_1)$ and $(L_1, p_1)$, where $G_1$ is used to denote gains and $L_1$ denotes losses. A lottery is a set of gains and/or losses with associated probabilities: $[(L_1, p_1), \\cdots, (G_n, p_n)]$, where $\\sum p_i = 1$. A preference between two prospect, for example \"A is prefered to B\", is denoted as $A > B$. \n\nThe following instance of PTAgent uses function parameters estimated in Tverski & Kahneman (1992). These parameters are sufficient to replicate observed modal choices between prospects in (Kahneman & Tverski, 1992) and (Tverski & Kahneman, 1992).",
"_____no_output_____"
],
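[
"A quick way to see the normative case: with $\\alpha = \\beta = \\gamma = \\lambda = 1$ the value function is the identity and the weighting function returns each probability unchanged, so the lottery valuation collapses to expected value. The small check below is hypothetical and standalone (it does not use `cpt_agent`).\n\n```python\nimport math\n\ndef value(x, alpha=1.0, lam=1.0):\n    # with alpha = lambda = 1 this is just the identity\n    return x**alpha if x >= 0 else -lam * (-x)**alpha\n\ndef weight(p, gamma=1.0):\n    # with gamma = 1, exp(-(-ln p)^gamma) simplifies back to p\n    return math.exp(-(-math.log(p))**gamma)\n\noutcomes, probs = [4000, 0], [0.8, 0.2]\nexpected_value = sum(x * p for x, p in zip(outcomes, probs))\nnormative_value = sum(value(x) * weight(p) for x, p in zip(outcomes, probs))\nprint(expected_value, normative_value)  # both equal 3200 up to floating point\n```",
"_____no_output_____"
],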
[
"---",
"_____no_output_____"
],
[
"### Decision Anomalies\n\nThe demonstrations below show instances of the PTAgent class exhibiting the same choice anomalies discussed in Kahneman & Tverskies seminal paper on Prospect Theory (1979).",
"_____no_output_____"
]
],
[
[
"from cpt_agent import PTAgent",
"_____no_output_____"
],
[
"pt = PTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)\npt",
"_____no_output_____"
]
],
[
[
"### The certainty effect\n\nThe certainty effect demonstrates that reducing the probability of outcomes from certainty has larger effects on preferences than equivalent reductions from risky (ie., non-certain) outcomes. Problems 1 and 2 illustrate this effect for absolute reductions in probabilities and problems 3 and 4 show this effect for relative reductions in probabilities. ",
"_____no_output_____"
],
[
"- Problem 1: $[(G_1, p_1), (G_2, p_2), (0, p_3)] < [(G_2, 1)]$\n- Problem 2: $[(G_1, p_1), (G_2, 0), (0, p_3)] > [(G_2, 1-p_2)]$\n\nSubtracting probability $p_2$ of outcome $G_2$ from both options in problem 1 leads to a preference reversal in problem 2.",
"_____no_output_____"
]
],
[
[
"# Problem 1\nlottery_1A = {'outcome':[2500, 2400, 0], 'probability':[0.33, 0.66, 0.01]}\nlottery_1B = {'outcome':[2400], 'probability':[1]}\n\npt.choose(lottery_1A, lottery_1B)",
"_____no_output_____"
],
[
"# Problem 2\nlottery_2C = {'outcome':[2500, 0], 'probability':[0.33, 0.67]}\nlottery_2D = {'outcome':[2400, 0], 'probability':[0.34, 0.66]}\n\npt.choose(lottery_2C, lottery_2D)",
"_____no_output_____"
]
],
[
[
"Scaling probabilities of risky outcome $G_1$ and certain outcome $G_2$ by $p'$ in problem 3 leads to a preference reversal in problem 4. This preference reversal violates the substitution axiom of Expected Utility Theory.\n\n- Problem 3: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, 1)]$\n- Problem 4: $\\left[\\left(G_1, p_1\\cdot p'\\right), \\left(0, \\frac{1-p_1}{p'}\\right)\\right] > [(G_2, p'), (0, 1-p')]$",
"_____no_output_____"
]
],
[
[
"# Problem 3\nlottery_3A = {'outcome':[4000, 0], 'probability':[0.8, 0.2]}\nlottery_3B = {'outcome':[3000], 'probability':[1]}\n\npt.choose(lottery_3A, lottery_3B)",
"_____no_output_____"
],
[
"# Problem 4\nlottery_4C = {'outcome':[4000, 0], 'probability':[0.2, 0.8]}\nlottery_4D = {'outcome':[3000, 0], 'probability':[0.25, 0.75]}\n\npt.choose(lottery_4C, lottery_4D)",
"_____no_output_____"
]
],
[
[
"### The reflection effect\n\nThe reflection effect demonstrates that altering outcomes by recasting prospects from the domain of gains to losses will correspondingly alter decision behavior from risk-aversion to risk-seeking. Since the reflection effect highlights preferences characterized as risk-seeking in the loss domain, the effect disqualifies risk-aversion as a general principle for explaining the certainty effect above. ",
"_____no_output_____"
],
[
"- Problem 3: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, 1)]$\n- Problem 3': $[(-G_1, p_1), (0, 1-p_1)] > [(-G_2, 1)]$",
"_____no_output_____"
]
],
[
[
"# Problem 3'\nlottery_3A_, lottery_3B_ = lottery_3A.copy(), lottery_3B.copy()\nlottery_3A_.update({'outcome':[-g for g in lottery_3A_['outcome']]})\nlottery_3B_.update({'outcome':[-g for g in lottery_3B_['outcome']]})\n\npt.choose(lottery_3A_, lottery_3B_)",
"_____no_output_____"
]
],
[
[
"- Problem 4: $\\left[\\left(G_1, p_1\\cdot p^{*}\\right), \\left(0, \\frac{1-p_1}{p^{*}}\\right)\\right] > [(G_2, p^{*}), (0, 1-p^{*})]$\n- Problem 4': $\\left[\\left(-G_1, p_1\\cdot p^{*}\\right), \\left(0, \\frac{1-p_1}{p^{*}}\\right)\\right] < [(-G_2, p^{*}), (0, 1-p^{*})]$",
"_____no_output_____"
]
],
[
[
"# Problem 4'\nlottery_4C_, lottery_4D_ = lottery_4C.copy(), lottery_4D.copy()\nlottery_4C_.update({'outcome':[-g for g in lottery_4C_['outcome']]})\nlottery_4D_.update({'outcome':[-g for g in lottery_4D_['outcome']]})\n\npt.choose(lottery_4C_, lottery_4D_)",
"_____no_output_____"
]
],
[
[
"### Risk Seeking in Gains, Risk Aversion in Losses\n\nIn addition to violations of the substitution axiom, scaling probabilities of lotteries with a result of highly improbable outcomes can induce risk seeking in gains, and risk aversion in losses. While these characteristics of choice behavior are not violations of normative theories of choice behavior, they contrast with more typical observations of risk aversion in gains and risk seeking in losses for outcomes that occur with stronger likelihood. In the domain of gains, risk seeking for low probability events seems to correspond to the popularity of state lotteries.",
"_____no_output_____"
],
[
"- Problem 7: $[(G_1, p_1), (0, 1-p_1)] < [(G_2, p_2), (0, 1-p_2)]$\n- Problem 8: $\\left[\\left(G_1, p_1\\cdot p'\\right), \\left(0, \\frac{1-p_1}{p'}\\right)\\right] > \\left[\\left(G_2, p_2\\cdot p'\\right), \\left(0, \\frac{1-p_2}{p'}\\right)\\right]$",
"_____no_output_____"
]
],
[
[
"# Problem 7\nlottery_7A = {'outcome':[6000, 0], 'probability':[0.45, 0.55]}\nlottery_7B = {'outcome':[3000, 0], 'probability':[0.9, 0.1]}\n\npt.choose(lottery_7A, lottery_7B)",
"_____no_output_____"
],
[
"# Problem 8\nlottery_8C = {'outcome':[6000, 0], 'probability':[0.001, 0.999]}\nlottery_8D = {'outcome':[3000, 0], 'probability':[0.002, 0.998]}\n\npt.choose(lottery_8C, lottery_8D)",
"_____no_output_____"
]
],
[
[
"Just as Prospect Theory accounts for risk seeking in gains for low probability events, the theory also accounts for risk aversion in the domain of losses when outcomes occur very infrequently. Risk aversion in the domain of losses seems to match well with consumer purchase of insurance products.",
"_____no_output_____"
]
],
[
[
"# Problem 7'\nlottery_7A_, lottery_7B_ = lottery_7A.copy(), lottery_7B.copy()\nlottery_7A_.update({'outcome':[-g for g in lottery_7A_['outcome']]})\nlottery_7B_.update({'outcome':[-g for g in lottery_7B_['outcome']]})\n\npt.choose(lottery_7A_, lottery_7B_)",
"_____no_output_____"
],
[
"# Problem 8'\nlottery_8C_, lottery_8D_ = lottery_8C.copy(), lottery_8D.copy()\nlottery_8C_.update({'outcome':[-g for g in lottery_8C_['outcome']]})\nlottery_8D_.update({'outcome':[-g for g in lottery_8D_['outcome']]})\n\npt.choose(lottery_8D_, lottery_8D_)",
"_____no_output_____"
]
],
[
[
"### Probabilistic Insurance\n\nKahneman & Tverski discuss another frequent choice anomalie called *probabilistic insurance*. To demonstrate choice behavior matching this anomalie, we first need to find a point of indifference reflecting the following relationship between current wealth $w$ and the cost of an insurance premium $y$ against a potential loss $x$ that occurs with probability $p$:\n\n$$pu(w-x) + (1-p)u(w) = u(w-y) \\tag{6}$$\n\nThat is, we are finding the premium $y$ for which a respondent is ambivelant between purchasing the insurance against loss $x$, and simply incurring the loss $x$ with probability $p$. Kahneman & Tverski introduce an insurance product called probabilistic insurance whereby the consumer only purchases a portion $r$ of the premium $y$. If the event leading to loss actually occurs, the purchaser pays the remainder of the premium with probability $r$, or is returned the premium and suffers the loss entirely with probability $1-r$. \n\n$$(1-r) p u(w-x) + rpu(w-y) + (1-p)u(w-ry) \\tag{7}$$\n\nKahneman & Tverski show that according to Expected Utility Theory, probabilistic insurance is generally preferred to either a fully insured product $u(w-y)$ or a loss $x$ with probability $p$ (under the assumption of ambivalence described above). In surveys, however, respondents generally show a strong preference against probabilistic insurance.",
"_____no_output_____"
]
],
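[
[
"Before turning to the code, it may help to see where an indifference premium could come from. The snippet below is only a hypothetical illustration that solves equation (6) in closed form under a simple power utility $u(z) = z^{0.88}$; the cells that follow appear to instead choose `prob_loss` so that the two Problem 9 valuations under the `PTAgent` come out roughly equal.\n\n```python\nw, x, p = 6000.0, 5000.0, 0.06925   # wealth, potential loss, loss probability (Problem 9 values)\nalpha = 0.88                        # curvature of an illustrative power utility\n\nu = lambda z: z ** alpha\nu_inv = lambda v: v ** (1.0 / alpha)\n\n# equation (6): p*u(w - x) + (1 - p)*u(w) = u(w - y)  =>  y = w - u_inv(lhs)\nlhs = p * u(w - x) + (1 - p) * u(w)\ny = w - u_inv(lhs)\nprint(round(y, 2))  # indifference premium under this toy utility\n```",
"_____no_output_____"
]
],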
[
[
"# Problem 9\npremium = 1000\nasset_am = 6000\nloss = 5000\nprob_loss = 0.06925\n\nlottery_9A = {'outcome':[asset_am - premium], 'probability':[1]}\nlottery_9B = {'outcome':[asset_am - loss, asset_am], 'probability':[prob_loss, 1-prob_loss]}",
"_____no_output_____"
],
[
"pt.evaluate(lottery_9A)",
"_____no_output_____"
],
[
"pt.evaluate(lottery_9B)",
"_____no_output_____"
],
[
"# Problem 10\nr = 0.94\n\nlottery_10A = {'outcome':[asset_am - loss, asset_am - premium, asset_am - r*premium], \n 'probability':[(1-r)*prob_loss, r*prob_loss, (1-prob_loss)]}",
"_____no_output_____"
],
[
"pt.choose(lottery_9B, lottery_10A)",
"_____no_output_____"
]
],
[
[
"### Cumulative Prospect Theory\n\nKahneman & Tverski modified their original account of Prospect Theory with Cumulative Prospect Theory (1990). The CPTAgent exhibits the same choice behavior shown by the PTAgent for each of the problems considered above. Additionally, the cumulative features of the weighting function better demonstrates the choice patterns of respondents when considering probabilistic insurance, namely, the preference against probabilistic insurance seems to hold under a broader range of probabilities $r$. ",
"_____no_output_____"
]
],
[
[
"from cpt_agent import CPTAgent",
"_____no_output_____"
],
[
"cpt = CPTAgent(alpha=0.88, gamma=0.61, lambda_=2.25)\ncpt",
"_____no_output_____"
],
[
"# Problem 11\nr = 0.73\n\nlottery_10B = {'outcome':[asset_am - loss, asset_am - premium, asset_am - r*premium], \n 'probability':[(1-r)*prob_loss, r*prob_loss, (1-prob_loss)]}\n\ncpt.choose(lottery_9A, lottery_10B)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
d0c4ad45a14031b295e171b13888775cb8512a23 | 532,433 | ipynb | Jupyter Notebook | 03深度学习基础/3.02 神经网络的训练.ipynb | DrDavidS/basic_Machine_Learning | d6f6538a13ed68543569f595fa833e6d220beedd | [
"MIT"
] | 15 | 2019-09-12T01:04:44.000Z | 2022-01-01T02:30:51.000Z | 03深度学习基础/3.02 神经网络的训练.ipynb | DrDavidS/basic_Machine_Learning | d6f6538a13ed68543569f595fa833e6d220beedd | [
"MIT"
] | null | null | null | 03深度学习基础/3.02 神经网络的训练.ipynb | DrDavidS/basic_Machine_Learning | d6f6538a13ed68543569f595fa833e6d220beedd | [
"MIT"
] | 11 | 2019-10-19T03:12:03.000Z | 2021-01-07T05:14:01.000Z | 41.492597 | 231 | 0.492699 | [
[
[
"# 神经网络的训练\n\n作者:杨岱川\n\n时间:2019年12月\n\ngithub:https://github.com/DrDavidS/basic_Machine_Learning\n\n开源协议:[MIT](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/LICENSE)\n\n参考文献:\n\n- 《深度学习入门》,作者:斋藤康毅;\n- 《深度学习》,作者:Ian Goodfellow 、Yoshua Bengio、Aaron Courville。\n- [Keras overview](https://tensorflow.google.cn/guide/keras/overview)\n\n## 本节目的\n\n在[3.01 神经网络与前向传播](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/03深度学习基础/3.01%20神经网络与前向传播.ipynb)中我们学习了基于多层感知机的神经网络前向传播的原理,并且动手实现了一个很简单的神经网络模型。\n\n但是,目前为止我们搭建的神经网络的权重矩阵 $W$ 是随机初始化的,我们只能说把输入 $X$ “喂”了进去, 然后“跑通”了这个网络。但是它的输出并没有任何实际的意义,因为我们并没有对它进行训练。\n\n在 3.02 教程中,我们的主题就是**神经网络的学习**,也就是我们的神经网络是如何从训练数据中自动获取最优权重参数的过程,这个过程的主要思想和之前在传统机器学习中描述的训练本质相同。\n\n我们为了让神经网络能够进行学习,将导入**损失函数(loss function)**这一指标,相信大家对其并不陌生。\n\n神经网络学习的目的就是以损失函数为基准,找出能够使它的值达到最小的权重参数。而为了找出尽可能小的损失函数的值,我们将采用**梯度法**。\n\n> 这些名词是不是听起来都很熟悉? \n>\n>“梯度法”在[2.11 XGBoost原理与应用](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/02机器学习基础/2.11%20XGBoost原理与应用.ipynb)中以**梯度提升**的形式出现,而“损失函数”更是贯穿了整个传统机器学习过程。",
"_____no_output_____"
],
[
"## 从数据中学习\n\n同其他机器学习算法一样,神经网络的特征仍然是可以从数据中学习。什么叫“从数据中学习”,就是说我们的权重参数可以由数据来自动决定。\n\n既然是机器学习,我们当然不能人工地决定参数,这样怎么忙得过来呢?\n\n>一些大型神经网络参数数量,当然参数更多不代表效果一定更好:\n>\n>- ALBERT:1200万,by 谷歌;\n>- BERT-large:3.34亿,by 谷歌;\n>- BERT-xlarge:12.7亿,by 谷歌;\n>- Megatron:80亿,by Nvidia;\n>- T5,110亿,by 谷歌。\n\n接下来我们会介绍神经网络地学习,也就是如何利用数据决定参数值。\n\n## 损失函数\n\n损失函数地概念大家都熟悉,我们在之前学过非常多的损失函数,比如 0-1 损失函数,均方误差损失函数等。这里我们会再介绍一种新的损失函数。\n\n### 交叉熵误差\n\n**交叉熵误差(cross entropy error)**是一种非常常用的损失函数,其公式如下:\n\n$$\\large E=-\\sum_k t_k\\log y_k$$\n\n其中,$\\log$ 是以 $\\rm e$ 为底数的自然对数 $\\log_e$。$k$ 表示共有 $k$ 个类别。$y_k$ 是神经网络的输出,$t_k$ 是真实的、正确的标签。$t_k$ 中只有正确解的标签索引为1,其他均为0,注意这里用的是 one-hot 表示,所以接受多分类问题。\n\n实际上这个公式只计算了正确解标签输出的自然对数。\n\n比如,一个三分类问题,有 A, B ,C 三种类别,而真实值为C,即 $t_k=[0,\\quad0,\\quad1]$,\n\n而神经网络经过 softmax 后的输出 $y_k=[0.1,\\quad0.3,\\quad0.6]$。所以其交叉熵误差为 $-\\log0.6\\approx0.51$。\n\n我们用代码来实现交叉熵:",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
],
[
"def cross_entropy_error(y, t):\n \"\"\"定义交叉熵损失函数\"\"\"\n delta = 1e-7\n return -np.sum(t * np.log(y + delta))",
"_____no_output_____"
]
],
[
[
"这里的 $y$ 和 $t$ 都是 NumPy 数组。我们在计算 `np.log` 的时候加上了一个很小的值 delta,是为了防止出现 `np.log(0)` 的情况,也就是返回值为负无穷。这样一来会导致后续计算无法进行。\n\n接下来我们试试使用代码进行简单的计算:",
"_____no_output_____"
]
],
[
[
"# 设置第三类为正确解\nt = np.array([0, 0, 1])\nt",
"_____no_output_____"
],
[
"# 设置三类概率情况,y1\ny1 = np.array([0.1, 0.3, 0.6])\ny1",
"_____no_output_____"
],
[
"# 设置三类概率情况,y2\ny2 = np.array([0.3, 0.4, 0.3])\ny2",
"_____no_output_____"
],
[
"# 计算y1交叉熵\ncross_entropy_error(y1, t)",
"_____no_output_____"
],
[
"# 计算y2交叉熵\ncross_entropy_error(y2, t)",
"_____no_output_____"
]
],
[
[
"可以看出第一个输出 y1 与监督数据(训练数据)更为切合,所以交叉熵误差更小。",
"_____no_output_____"
],
[
"### mini-batch 学习\n\n机器学习使用训练数据进行学习,我们对训练数据计算损失函数的值。找出让这个值尽可能小的参数。也就是说,计算损失函数的时候必须将所有的训练数据作为对象,有 100 个数据,就应当把这 100 个损失函数的总和作为学习的目标。\n\n要计算所有训练数据的损失函数的综合,以交叉熵误差为例:\n\n$$\\large E=-\\frac{1}{N}\\sum_n \\sum_k t_{nk}\\log y_{nk}$$\n\n虽然看起来复杂,其实只是把单个数据的损失函数扩展到了 $n$ 个数据而已,最后再除以 $N$,求得单个数据的“平均损失函数”。这样平均化以后,可以获得和训练数据的数量无关的统一指标。\n\n问题在于,很多数据集的数据量可不少,以 MNIST 为例,其训练数据有 60000 个,如果以全部数据为对象求损失函数的和,则时间花费较长。如果更大的数据集,比如 [ImageNet](http://www.image-net.org/about-stats) 数据集,甚至有1419万张图片(2019年12月),这种情况下以全部数据为对象计算损失函数是不现实的。\n\n因此,我们从全部数据中选出一部分,作为全部数据的“近似”。神经网络的学习也是从训练数据中选出一批数据(mini-batch,小批量),然后对每个mini-batch进行学习。\n\n比如在 MNIST 数据集中,每次选择 100 张图片学习。这种学习方式称为 **mini-batch学习**。或者说,整个训练过程的 batch-size 为 100。",
"_____no_output_____"
],
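[
"A minimal sketch of how a mini-batch can be drawn and scored (the random `x_train` / `t_train` below are stand-ins for a real training set such as MNIST; the batched cross entropy simply averages the per-sample loss over the batch):\n\n```python\nimport numpy as np\n\n# 随机生成的占位数据 (代替真实的 x_train / t_train, 例如 MNIST)\nx_train = np.random.rand(1000, 784)\nt_train = np.eye(10)[np.random.randint(0, 10, 1000)]  # one-hot 标签\n\nbatch_size = 100\nbatch_mask = np.random.choice(x_train.shape[0], batch_size)  # 随机抽取 mini-batch 的索引\nx_batch = x_train[batch_mask]\nt_batch = t_train[batch_mask]\n\ndef cross_entropy_error_batch(y, t):\n    # mini-batch 版的交叉熵误差: E = -(1/N) * sum(t * log(y))\n    if y.ndim == 1:\n        y = y.reshape(1, y.size)\n        t = t.reshape(1, t.size)\n    delta = 1e-7\n    return -np.sum(t * np.log(y + delta)) / y.shape[0]\n\nprint(x_batch.shape, t_batch.shape)\n```",
"_____no_output_____"
],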
[
"### 为何要设定损失函数\n\n为什么我们训练过程是损失函数最小?我们的最终目的是提高神经网络的识别精度,为什么不把识别精度作为指标?\n\n这涉及到导数在神经网络学习中的作用。以后会详细解释,在神经网络的学习中,寻找最优参数(权重和偏置)时,要寻找使得损失函数的值尽可能小的的参数。而为了找到让损失函数值尽可能小的地方,需要计算参数的导数(准确说是**梯度**),然后以这个导数为指引,逐步更新参数的值。\n\n假设有一个神经网络,我们关注这个网络中某一个权重参数。现在,对这个权重参数的损失函数求导,表示的是“如果稍微改变这个权重参数的值,损失函数会怎么变化”。如果导数的值为负,通过使该权重参数向正方向改变,可以减小损失函数的值;反过来,如果导数的值为正,则通过使该权重参数向负方向改变,可以减小损失函数的值。\n\n>如果导数的值为 0 时,无论权重参数向哪个方向变化,损失函数的值都不变。\n\n如果我们用识别精度(准确率)作为指标,那么绝大多数地方的导数都会变成 0 ,导致参数无法更新。\n\n>假设某个神经网络识别出了 100 个训练数据中的 32 个,这时候准确率为 32%。如果我们以准确率为指标,即使稍微改变权重参数的值,识别的准确率也将继续保持在 32%,不会有变化。也就是说,仅仅微调参数,是无法改善识别精度的。即使有所改善,也不会变成 32.011% 这样连续变化,而是变成 33%,34% 这样离散的值。\n>\n>而如果我们采用**损失函数**作为指标,则当前损失函数的值可以表示为 0.92543...之类的值,而稍微微调一下参数,对应损失函数也会如 0.93431... 这样发生连续的变化。\n\n所以,识别精度对微小的参数变化基本没啥反应,即使有反应,它的值也是不连续地、突然地变化。\n\n回忆之前学习的 **阶跃函数** 和 **sigmoid 函数**:",
"_____no_output_____"
]
],
[
[
"import matplotlib\nprint(matplotlib.__version__)",
"3.1.1\n"
],
[
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg' # 生成矢量图\n\ndef sigmoid(x):\n \"\"\"定义sigmoid函数\"\"\"\n return 1.0/(1.0 + np.exp(-x))\n\ndef step_function(x):\n \"\"\"定义阶跃函数\"\"\"\n return np.array(x > 0, dtype=np.int)\n\n# 阶跃函数\nplt.figure(figsize=(8,4))\nplt.subplot(1, 2, 1)\nx = np.arange(-6.0, 6.0, 0.1)\nplt.plot(x, step_function(x))\nplt.axhline(y=0.0,ls='dotted',color='k')\nplt.axhline(y=1.0,ls='dotted',color='k')\nplt.axhline(y=0.5,ls='dotted',color='k')\nplt.yticks([0.0,0.5,1.0])\nplt.ylim(-0.1,1.1)\nplt.xlabel('x')\nplt.ylabel('$step(x)$')\nplt.title('Step Function')\n# plt.savefig(\"pic001.png\", dpi=600) # 保存图片\n\n# sigmoid 函数\nplt.subplot(1, 2, 2)\nplt.plot(x, sigmoid(x))\nplt.axhline(y=0.0,ls='dotted',color='k')\nplt.axhline(y=1.0,ls='dotted',color='k')\nplt.axhline(y=0.5,ls='dotted',color='k')\nplt.yticks([0.0,0.5,1.0])\nplt.ylim(-0.1,1.1)\nplt.xlabel('x')\nplt.ylabel('$sigmoid(x)$')\nplt.title('Sigmoid Function')\n# plt.savefig(\"pic001.png\", dpi=600) # 保存图片\nplt.tight_layout(3) # 间隔\nplt.show()",
"_____no_output_____"
]
],
[
[
"如果我们使用**阶跃函数**作为激活函数,神经网络的学习无法进行。如图,阶跃函数的导数在绝大多数的地方都是 0 ,也就是说,如果我们采用阶跃函数,那么即使将损失函数作为指标,参数的微小变化也会被阶跃函数抹杀,导致损失函数的值没有任何变化。\n\n而 **sigmoid 函数**,如图,不仅函数的输出是连续变化的,曲线的斜率也是连续变化的。也就是说,sigmoid 函数的导数在任何地方都不为 0。得益于这个性质,神经网络的学习得以正确进行。",
"_____no_output_____"
],
[
"## 数值微分\n\n我们使用梯度信息决定前进方向。现在我们会介绍什么是梯度,它有什么性质。\n\n### 导数\n\n相信大家对导数都不陌生。导数就是表示某个瞬间的变化量,定义为:\n\n$$\\large \\frac{{\\rm d}f(x)}{{\\rm d}x} = \\lim_{h\\to 0}\\frac{f(x+h)-f(x)}{h}$$\n\n那么现在我们参考上式实现函数求导:",
"_____no_output_____"
]
],
[
[
"def numerical_diff(f, x):\n \"\"\"不太好的导数实现\"\"\"\n h = 1e-50\n return (f(x + h) - f(x)) / h",
"_____no_output_____"
]
],
[
[
"`numerical_diff` 的命名来源于 **数值微分(numerical differentiation)**。\n\n实际上,我们对 $h$ 赋予了一个很小的值,反倒产生了**舍入误差**:",
"_____no_output_____"
]
],
[
[
"np.float32(1e-50)",
"_____no_output_____"
]
],
[
[
"如果采用 `float32` 类型来表示 $10^{-50}$,就会变成 $0.0$,无法正确表示。这是第一个问题,我们应当将微小值 $h$ 改为 $10^{-4}$,就可以得到正确的结果了。\n\n第二个问题和函数 $f$ 的差分有关。我们虽然实现了计算函数 $f$ 在 $x+h$ 和 $x$ 之间的差分,但是是有误差的。我们实际上计算的是点 $x+h$ 和 $x$ 之间连线的斜率,而真正的导数则是函数在 $x$ 处切线的斜率。出现这个差异的原因是因为 $h$ 不能真的无限接近于 0。\n\n为了减小误差,我们计算函数 $f$ 在 $(x+h)$ 和 $(x-h)$ 之间的差分。因为这种计算方法以 $x$ 为中心,计算左右两边的差分,所以叫**中心差分**,而 $(x+h)$ 和 $x$ 之间的差分叫**前向差分**。\n\n现在改进如下:",
"_____no_output_____"
]
],
[
[
"def numerical_diff(f, x):\n \"\"\"改进后的导数实现\"\"\"\n h = 1e-4\n return (f(x + h) - f(x - h)) / (2 * h)",
"_____no_output_____"
]
],
[
[
"### 数值微分的例子\n\n使用上面的数值微分函数对简单函数求导:\n\n$$\\large y=0.01x^2+0.1x$$\n\n首先我们绘制这个函数的图像。",
"_____no_output_____"
]
],
[
[
"def function_1(x):\n \"\"\"定义函数\"\"\"\n return 0.01 * x**2 + 0.1*x\n\nx = np.arange(0.0, 20.0, 0.1)\ny = function_1(x)\nplt.xlabel('x')\nplt.ylabel('$f(x)$')\nplt.plot(x, y)\nplt.show()",
"_____no_output_____"
]
],
[
[
"计算函数在 $x=5$ 时候的导数,画切线:",
"_____no_output_____"
]
],
[
[
"def tangent_line(f, x):\n \"\"\"切线\"\"\"\n d = numerical_diff(f, x)\n print(d)\n y = f(x) - d*x\n return lambda t: d*t + y\n \nx = np.arange(0.0, 20.0, 0.1)\ny = function_1(x)\nplt.xlabel(\"x\")\nplt.ylabel(\"f(x)\")\n\ntf = tangent_line(function_1, 5)\ny2 = tf(x)\n\nplt.plot(x, y)\nplt.plot(x, y2)\nplt.axvline(x=5,ls='dotted',color='k')\nplt.axhline(y=0.75,ls='dotted',color='k')\nplt.yticks([0, 0.75, 1, 2, 3, 4])\nplt.show()",
"0.1999999999990898\n"
]
],
[
[
"众所周知,$f(x)=0.01x^2+0.1x$ 求导的解析解是 $\\cfrac{{\\rm d}f(x)}{{\\rm d}x}=0.02x+0.1$,因此在 $x=5$ 的时候,“真的导数”为 0.2。和上面的结果比起来,严格来说不一致,但是误差很小。",
"_____no_output_____"
],
[
"### 偏导数\n\n接下来我们看一个新函数,这个函数有两个变量:\n\n$$\\large f(x_0, x_1)=x_0^2+x_1^2$$\n\n其图像的绘制,用代码实现就是如下:",
"_____no_output_____"
]
],
[
[
"from mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\ndef function_2_old(x_0, x_1):\n \"\"\"二元函数\"\"\"\n return x_0**2 + x_1**2\n\n\n\nfig = plt.figure()\nax = Axes3D(fig)\n\nx_0 = np.arange(-2, 2.5, 0.2) # x0 \nx_1 = np.arange(-2, 2.5, 0.2) # x1\nX_0, X_1 = np.meshgrid(x_0, x_1) # 二维数组生成\nY = function_2_old(X_0, X_1)\n\nax.set_xlabel('$x_0$')\nax.set_ylabel('$x_1$')\nax.set_zlabel('$f(x)$')\n\nax.plot_surface(X_0, X_1, Y, rstride=1, cstride=1, cmap='rainbow')\n# ax.view_init(30, 60) # 调整视角\nplt.show()",
"_____no_output_____"
]
],
[
[
"很漂亮的一幅图。\n\n如果我们要对这个二元函数求导,就有必要区分是对 $x_0$ 还是 $x_1$ 求导。\n\n这里讨论的有多个变量函数的导数就是**偏导数**,表示为 $\\cfrac{\\partial f}{\\partial x_0}$、$\\cfrac{\\partial f}{\\partial x_1}$。\n\n当 $x_0=3$,$x_1=4$ 的时候,求关于 $x_0$ 的偏导数$\\cfrac{\\partial f}{\\partial x_0}$:",
"_____no_output_____"
]
],
[
[
"def function_tmp1(x0):\n return x0 * x0 + 4.0**2.0\n\nnumerical_diff(function_tmp1, 3.0)",
"_____no_output_____"
]
],
[
[
"当 $x_0=3$,$x_1=4$ 的时候,求关于 $x_1$ 的偏导数$\\cfrac{\\partial f}{\\partial x_1}$:",
"_____no_output_____"
]
],
[
[
"def function_tmp2(x1):\n return 3.0**2.0 + x1 * x1\n\nnumerical_diff(function_tmp2, 4.0)",
"_____no_output_____"
]
],
[
[
"实际上动笔计算,这两个计算值和解析解的导数基本一致。\n\n所以偏导数和单变量的导数一样,都是求某个地方的**斜率**,不过偏导数需要将多个变量中的某一个变量定为目标变量,然后将其他变量固定为某个值。",
"_____no_output_____"
],
[
"## 梯度\n\n铺垫了这么多,终于到了关键的环节。\n\n我们刚刚计算了 $x_0$ 和 $x_1$ 的偏导数,现在我们要一起计算 $x_0$ 和 $x_1$ 的偏导数。\n\n比如我们考虑求 $x_0=3$,$x_1=4$ 时 $(x_0,x_1)$ 的偏导数 $\\left( \\cfrac{\\partial f}{\\partial x_0},\\cfrac{\\partial f}{\\partial x_1} \\right)$。\n\n>像 $\\left( \\cfrac{\\partial f}{\\partial x_0},\\cfrac{\\partial f}{\\partial x_1} \\right)$ 这样由全部变量的偏导数汇总而成的向量就叫做**梯度**。\n\n我们采用以下代码来计算:",
"_____no_output_____"
]
],
[
[
"def _numerical_gradient_no_batch(f, x):\n \"\"\"\n 计算梯度\n \n 输入:\n f:函数\n x:数组,多元变量。\n \"\"\"\n h = 1e-4 # 0.0001\n grad = np.zeros_like(x) # 生成一个和x形状一样的全为0的数组\n \n for idx in range(x.size):\n tmp_val = x[idx]\n x[idx] = float(tmp_val) + h\n fxh1 = f(x) # f(x+h)\n \n x[idx] = tmp_val - h \n fxh2 = f(x) # f(x-h)\n grad[idx] = (fxh1 - fxh2) / (2*h)\n \n x[idx] = tmp_val # 还原值\n \n return grad\n\ndef function_2(x):\n \"\"\"\n 二元函数\n \n 重新定义一下,此时输入为一个np.array数组\n \"\"\"\n return x[0]**2 + x[1]**2",
"_____no_output_____"
]
],
[
[
"这个代码看起来稍微长一点,但是和求单变量的数值微分本质一样。\n\n现在我们用这个函数实际计算一下梯度:",
"_____no_output_____"
]
],
[
[
"_numerical_gradient_no_batch(function_2, np.array([3.0, 4.0]))",
"_____no_output_____"
],
[
"_numerical_gradient_no_batch(function_2, np.array([0.0, 2.0]))",
"_____no_output_____"
],
[
"_numerical_gradient_no_batch(function_2, np.array([3.0, 0.0]))",
"_____no_output_____"
]
],
[
[
"像这样我们就能计算 $(x_0,x_1)$ 在各个点的梯度了。现在我们要把 $f(x_0,x_1)=x_0^2+x_1^2$ 的梯度画在图上,不过我们画的是**负梯度**的向量。\n\n代码参考:[deep-learning-from-scratch](https://github.com/oreilly-japan/deep-learning-from-scratch/blob/master/ch04/gradient_2d.py)。",
"_____no_output_____"
]
],
[
[
"def numerical_gradient(f, X):\n \"\"\"计算梯度矢量\"\"\"\n if X.ndim == 1:\n return _numerical_gradient_no_batch(f, X)\n else:\n grad = np.zeros_like(X)\n \n for idx, x in enumerate(X):\n grad[idx] = _numerical_gradient_no_batch(f, x)\n \n return grad\n\nx0 = np.arange(-2, 2.5, 0.25)\nx1 = np.arange(-2, 2.5, 0.25)\nX, Y = np.meshgrid(x0, x1)\n\nX = X.flatten()\nY = Y.flatten()\n\ngrad = numerical_gradient(function_2, np.array([X, Y]).T).T\nplt.figure()\nplt.quiver(X, Y, -grad[0], -grad[1], angles=\"xy\",color=\"#666666\")\nplt.xlim([-2, 2])\nplt.ylim([-2, 2])\nplt.xlabel('x0')\nplt.ylabel('x1')\nplt.grid()\nplt.draw()\nplt.show()",
"_____no_output_____"
]
],
[
[
"如图所示,$f(x_0,x_1)=x_0^2+x_1^2$ 的梯度呈现为有向箭头,而且:\n\n- 所有的箭头都指向 $f(x_0,x_1)$ 的“最低处”;\n- 离“最低处”越远,箭头越大。\n\n> 实际上,梯度并非任何时候都指向最低处。\n>\n> 更严格讲,**梯度指示的方向是各点处的函数值减小最多的方向**。\n>\n> 也就是说,我们有可能在某些优化过程中只收敛到了局部最小值。",
"_____no_output_____"
],
[
"### 梯度法\n\n机器学习的主要任务是在训练(学习)过程中寻找最优的参数。这里“最优参数”就是让损失函数取到最小值时的参数。\n\n但是损失函数一般都很复杂(回忆一下 `XGBoost` 的损失函数推导),参数空间很庞大,我们一般不知道它在何处能取得最小值。而使用梯度来寻找函数最小值(或者尽可能小的值)的方法就是梯度法。\n\n>再次提醒:**梯度** 表示的是各点出函数的值减小最多的方向,因此没法保证梯度所指的方向就是函数的最小值或是真正应该前进的方向。实际上在复杂的函数中,梯度指示的方向基本上都 **不是** 函数值的最小位置。\n\n我们沿着梯度方向能够最大限度减小函数(比如损失函数)的值,因此在寻找函数的最小值的位置上还是以梯度信息为线索,决定前进的方向。\n\n这个时候**梯度法**就起作用了。在梯度法中,函数的取值从当前位置沿着梯度方向前进一小步(配合上面的图),然后在新的地方重新求梯度,再沿着梯度方向前进,如此循环往复。\n\n像这样,通过不断地沿着梯度方向前进,逐渐减小函数的值的过程就是**梯度法(gradient method)**,它是解决机器学习中最优化问题的常用方法。\n\n>严格地说,寻找最小值的梯度法叫**梯度下降法**(gradient descent method),而寻找最大值的梯度法称为**梯度上升法**(gradient ascent method),注意和 **提升方法**(Boosting)相区别。\n\n用数学式来表达梯度法,就是:\n\n$$x_0=x_0 - \\eta \\frac{\\partial f}{\\partial x_0}$$\n\n$$x_1=x_1 - \\eta \\frac{\\partial f}{\\partial x_1}$$\n\n其中,$\\eta$,读作 **eta**,表示更新量。回忆一下,在之前的 SKLearn 的机器学习示例中,大多都用 `eta` 作为**学习率(learning rate)**的参数,在神经网络中也是如此。学习率决定在一次学习中,应该学习多少,以及在多大程度上更新参数,就像我们走在下山路上,$\\eta$ 决定了我们每一步迈多远。\n\n上面的公式只更新了一次,我们需要反复执行,逐渐减小函数值。\n\n$\\eta$ 的具体取值不能太大或者太小,否则都没法抵达一个“合适的位置”。在神经网络中,一般会一边改变学习率的值,一般确认训练是否正常进行。\n\n代码参考[gradient_method.py](https://github.com/oreilly-japan/deep-learning-from-scratch/blob/master/ch04/gradient_method.py),用代码实现梯度下降法:",
"_____no_output_____"
]
],
[
[
"def gradient_descent(f, init_x, lr=0.01, step_num=100):\n \"\"\"\n 梯度下降法\n \n f:要进行最优化的参数\n init_x:初始值\n lr:学习率,默认为0.01\n step_sum:梯度下降法重复的次数\n \"\"\"\n x = init_x\n x_history = [] # 保存每一步的信息\n\n for i in range(step_num):\n x_history.append( x.copy() )\n\n grad = numerical_gradient(f, x) # 计算梯度矢量\n x -= lr * grad\n\n return x, np.array(x_history)",
"_____no_output_____"
]
],
[
[
"使用这个函数就能求得函数的极小值,如果顺利,还能求得最小值。\n\n现在我们来求 $f(x_0,x_1)=x_0^2+x_1^2$ 的最小值:",
"_____no_output_____"
]
],
[
[
"init_x = np.array([-3.0, 4.0]) # 初始位置\nresutl = gradient_descent(function_2, init_x=init_x, lr=0.1, step_num=100) # 执行梯度下降算法\nprint(resutl[0])",
"[-6.11110793e-10 8.14814391e-10]\n"
]
],
[
[
"最终结果是 $(-6.11110793\\times10^{-10}, 8.14814391\\times10^{-10})$,非常接近我们已知的正确值 $(0, 0)$。所以说通过梯度下降法我们基本得到了正确的结果。\n\n如果我们把梯度更新的图片画出,如下:",
"_____no_output_____"
]
],
[
[
"init_x = np.array([-3.0, 4.0]) # 初始位置\nlr = 0.1\nstep_num = 20\nx, x_history = gradient_descent(function_2, init_x, lr=lr, step_num=step_num)\n\nstep = 0.01\nx_0 = np.arange(-5,5,step)\nx_1 = np.arange(-5,5,step)\nX, Y = np.meshgrid(x_0, x_1) # 建立网格\nZ = function_2_old(X, Y)\nplt.contour(X, Y, Z, levels=10, linewidths=0.5, linestyles='dashdot') # 绘制等高线\n\nplt.plot(x_history[:,0], x_history[:,1], '.') # 绘制梯度下降过程\n\nplt.xlim(-4.5, 4.5)\nplt.ylim(-4.5, 4.5)\nplt.xlabel(\"$x_0$\")\nplt.ylabel(\"$x_1$\")\nplt.show()",
"_____no_output_____"
]
],
[
[
"前面说过,**学习率**过大或者过小都无法得到好结果。\n\n可以做实验验证一下:",
"_____no_output_____"
]
],
[
[
"# 学习率过大\ninit_x = np.array([-3.0, 4.0]) # 初始位置\nlr = 10.0 # 学习率\n\nx, x_history = gradient_descent(function_2, init_x=init_x, lr=lr, step_num=step_num)\nprint(x)",
"[-2.58983747e+13 -1.29524862e+12]\n"
],
[
"# 学习率过小\ninit_x = np.array([-3.0, 4.0]) # 初始位置\nlr = 1e-10 # 学习率\n\nx, x_history = gradient_descent(function_2, init_x=init_x, lr=lr, step_num=step_num)\nprint(x)",
"[-2.99999999 3.99999998]\n"
]
],
[
[
"由此可见:\n\n- 学习率过大,会发散成一个很大的值;\n- 学习率过小,基本上还没更新就结束了。\n\n因此我们需要设置适当的学习率。记住,学习率是一个**超参数**,通常是人工设定的。",
"_____no_output_____"
],
[
"### 神经网络的梯度\n\n神经网络的训练也是要求梯度的。这里的梯度指的是**损失函数**关于权重参数的梯度。比如,在[3.01 神经网络与前向传播](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/深度学习基础/3.01%20神经网络与前向传播.ipynb)中,我们搭建了一个三层神经网络。其中第一层(layer1)的权重 $W$ 的形状为 $2\\times3$,损失函数用 $L$ 表示。\n\n此时梯度用 $\\cfrac{\\partial L}{\\partial W}$ 表示。用具体的数学表达式(注意下标为了方便说明,和以前不一样)来说,就是:\n\n$$\n\\large\nW=\n\\begin{pmatrix}\nw_{11} & w_{12} & w_{13} \\\\\nw_{21} & w_{22} & w_{23}\\\\\n\\end{pmatrix}\n$$\n\n$$\n\\large\n\\frac{\\partial L}{\\partial W}=\n\\begin{pmatrix}\n\\cfrac{\\partial L}{\\partial w_{11}} & \\cfrac{\\partial L}{\\partial w_{12}} & \\cfrac{\\partial L}{\\partial w_{13}} \\\\\n\\cfrac{\\partial L}{\\partial w_{21}} & \\cfrac{\\partial L}{\\partial w_{22}} & \\cfrac{\\partial L}{\\partial w_{23}}\\\\\n\\end{pmatrix}\n$$\n\n$\\cfrac{\\partial L}{\\partial W}$ 的元素由各个元素关于 $W$ 的偏导数构成。比如,第1行第1列的元素 $\\cfrac{\\partial L}{\\partial w_{11}}$ 表示当 $w_{11}$ 稍微变化的时候,损失函数 $L$ 会发生多大变化。\n\n我们以一个简单的神经网络为例子,来实现求梯度的代码:",
"_____no_output_____"
]
],
[
[
"import os\nimport sys\nimport numpy as np\n\ndef softmax(a):\n \"\"\"定义 softmax 函数\"\"\"\n exp_a = np.exp(a)\n sum_exp_a = np.sum(exp_a)\n y = exp_a / sum_exp_a\n return y\n\ndef cross_entropy_error(y, t):\n \"\"\"定义交叉熵损失函数\"\"\"\n delta = 1e-7\n return -np.sum(t * np.log(y + delta))\n\ndef numerical_gradient(f, X):\n \"\"\"计算梯度矢量\"\"\"\n if X.ndim == 1:\n return _numerical_gradient_no_batch(f, X)\n else:\n grad = np.zeros_like(X)\n \n for idx, x in enumerate(X):\n grad[idx] = _numerical_gradient_no_batch(f, x)\n \n return grad\n\nclass simpleNet:\n def __init__(self):\n \"\"\"初始化\"\"\"\n # self.W = np.random.randn(2, 3) # 高斯分布初始化\n self.W = np.array([[ 0.68851943, 2.06916921, -0.88125086],\n [-1.30951576, 0.72350587, -1.88984482]])\n self.q = 1\n \n def predict(self, x):\n \"\"\"预测\"\"\"\n return np.dot(x, self.W)\n \n def loss(self, x, t):\n \"\"\"损失函数\"\"\"\n z = self.predict(x)\n y = softmax(z)\n loss = cross_entropy_error(y, t)\n \n return loss",
"_____no_output_____"
]
],
[
[
"我们建立了一个名叫 `simpleNet` 的简单神经网络,其中 `softmax` 和 `cross_entropy_error` 都和以前一样。simpleNet 类只有一个实例变量,也就是形状为 $2\\times 3$ 的权重参数矩阵。\n\n网络中有两个方法,一个是前向传播 `predict`,用于预测;另一个是用于求损失函数的 `loss` 。其中参数 `x` 接受输入数据,`t`接受正确标签。\n\n现在我们运行一下看看结果:",
"_____no_output_____"
]
],
[
[
"net = simpleNet()\nprint(net.W) # 权重参数",
"[[ 0.68851943 2.06916921 -0.88125086]\n [-1.30951576 0.72350587 -1.88984482]]\n"
],
[
"x = np.array([0.6, 0.9])\np = net.predict(x) # 预测\nprint(p)",
"[-0.76545253 1.89265681 -2.22961085]\n"
],
[
"np.argmax(p) # 正确解(最大值)的索引",
"_____no_output_____"
],
[
"# 正确解的标签,如果是随机初始化,注意每次运行可能都不一样!!!\nt = np.array([0, 1, 0]) ",
"_____no_output_____"
],
[
"# 损失\nloss1 = net.loss(x, t)\nprint(loss1)",
"0.08276656069658565\n"
]
],
[
[
"现在我们来求**梯度**。我们使用 `numerical_gradient(f, x)` 求梯度:\n\n由于 `numerical_gradient(f, x)` 中的 `f` 是一个函数,所以为了程序兼容,我们先定义函数 `f(W)`:",
"_____no_output_____"
]
],
[
[
"def f(W):\n return net.loss(x, t)",
"_____no_output_____"
],
[
"dW = numerical_gradient(f, net.W)\nprint(dW)",
"[[ 0.03870828 -0.04766044 0.00895216]\n [ 0.05806242 -0.07149067 0.01342824]]\n"
]
],
[
[
"`numerical_gradient(f, net.W)` 的结果是 $dW$,形状是一个 $2\\times 3$ 的矩阵。\n\n观察这个矩阵,在$\\cfrac{\\partial L}{\\partial W}$ 中:\n\n$\\cfrac{\\partial L}{\\partial W_{11}}$的值约为0.039,这表示如果将$w_{11}$ 增加 $h$,则损失函数的值会增加 $0.039h$。\n\n$\\cfrac{\\partial L}{\\partial W_{22}}$的值约为-0.071,这表示如果将$w_{22}$ 增加 $h$,则损失函数的值会减少 $0.071h$。\n\n所以,从减少损失函数的目的出发,$w_{22}$ 应该向正方向更新,而 $w_{11}$ 应该向负方向更新。\n\n我们求出神经网络在输入 $x=[0.6, \\quad 0.9]$ 的梯度以后,只需要根据梯度法,更新权重参数即可。\n\n手动更新试试:",
"_____no_output_____"
]
],
[
[
"# 学习率 lr\nlr = 1e-4\nprint(lr)",
"0.0001\n"
],
[
"class simpleNet_step2:\n def __init__(self):\n \"\"\"初始化,手动更新一次参数\"\"\"\n self.W = np.array([[ 0.68851943 - 0.0001, 2.06916921 + 0.0001, -0.88125086 - 0.0001],\n [-1.30951576 - 0.0001, 0.72350587 + 0.0001, -1.88984482 - 0.0001]])\n self.q = 1\n \n def predict(self, x):\n \"\"\"预测\"\"\"\n return np.dot(x, self.W)\n \n def loss(self, x, t):\n \"\"\"损失函数\"\"\"\n z = self.predict(x)\n y = softmax(z)\n loss = cross_entropy_error(y, t)\n \n return loss",
"_____no_output_____"
],
[
"net = simpleNet_step2()\nnet.W",
"_____no_output_____"
],
[
"x = np.array([0.6, 0.9])\np = net.predict(x) # 预测\nprint(p)",
"[-0.76560253 1.89280681 -2.22976085]\n"
],
[
"# 最大值为正确答案\nt = np.array([0, 1, 0]) ",
"_____no_output_____"
],
[
"# 损失\nloss2 = net.loss(x, t)\nprint(loss2)",
"0.08274273376501982\n"
],
[
"if loss2 < loss1:\n print(\"loss2 比 loss1 小了:\", loss1 - loss2)",
"loss2 比 loss1 小了: 2.3826931565829046e-05\n"
]
],
[
[
"由此可见,我们按照梯度法,更新了权重参数(步长为学习率)以后,损失函数的值下降了。",
"_____no_output_____"
],
[
"## 学习算法总结\n\n到此,我们学习了“损失函数”、“mini-batch”、“梯度”、“梯度下降”等概念。现在回顾一些神经网络的学习步骤:\n\n1. **minibatch**:\n \n 从训练数据中**随机**选出一部分数据,这部分数据称为 mini-batch。我们的目标是减小 mini-batch 的损失函数的值。\n \n >在 PyTorch 中,使用 `torch.utils.data` 实现此功能,参考 [TORCH.UTILS.DATA](https://pytorch.org/docs/stable/data.html#multi-process-data-loading)。\n >\n >在 Tensorflow 中,使用 `tf.data` 实现此功能,参考 [tf.data: Build TensorFlow input pipelines](https://tensorflow.google.cn/guide/data)。\n\n\n2. **计算梯度**:\n\n 为了减小 mini-batch 的损失函数的值,需要求出各个权重参数的梯度。梯度表示损失函数的值减小最多的方向。\n\n\n3. **更新参数**:\n\n 将权重参数 $W$ 沿梯度方向进行微小更新。\n\n\n4. **重复**:\n \n 重复步骤1、步骤2、步骤3。\n \n\n神经网络的学习大概就是按照上面4个步骤进行。这个方法通过梯度下降法更新参数。由于我们使用的数据是**随机**选择的 mini-batch 数据,所以又称为**随机梯度下降(stochastic gradient descent)**。这就是其名称由来。\n\n在大多数深度学习框架中,随机梯度下降法一般由一个名为 **SGD** 的函数来实现:\n\n- TensorFlow:`tf.keras.optimizers.SGD`。\n- PyTorch:`torch.optim.SGD`\n\n实际上,随机梯度下降是通过数值微分实现的,但是缺点是计算上很耗费时间,后续我们会学习**误差反向传播**法,来解决这个问题。",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] | [
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
d0c4b06eea5f0a91ccc1cece89c39e8f714d68a2 | 19,882 | ipynb | Jupyter Notebook | scratch/Lecture01.ipynb | cliburn/sta-663-2017 | 89e059dfff25a4aa427cdec5ded755ab456fbc16 | [
"MIT"
] | 52 | 2017-01-11T03:16:00.000Z | 2021-01-15T05:28:48.000Z | scratch/Lecture01.ipynb | slimdt/Duke_Stat633_2017 | 89e059dfff25a4aa427cdec5ded755ab456fbc16 | [
"MIT"
] | 1 | 2017-04-16T17:10:49.000Z | 2017-04-16T19:13:03.000Z | scratch/Lecture01.ipynb | slimdt/Duke_Stat633_2017 | 89e059dfff25a4aa427cdec5ded755ab456fbc16 | [
"MIT"
] | 47 | 2017-01-13T04:50:54.000Z | 2021-06-23T11:48:33.000Z | 17.847397 | 279 | 0.437884 | [
[
[
"print('Hello, world!')",
"Hello, world!\n"
]
],
[
[
"# Heading 1\n\n## Heading 2\n\n- this\n- is\n- a \n- long, **long**, list\n\n$$\n\\alpha, \\beta, \\gamma\n$$",
"_____no_output_____"
]
],
[
[
"3",
"_____no_output_____"
],
[
"3 * 4 + 5",
"_____no_output_____"
],
[
"3 / 5",
"_____no_output_____"
],
[
"3 // 5",
"_____no_output_____"
],
[
"7 % 5",
"_____no_output_____"
],
[
"2 ** 3",
"_____no_output_____"
],
[
"2 ^ 3",
"_____no_output_____"
],
[
"2 << 4",
"_____no_output_____"
],
[
"3.14",
"_____no_output_____"
],
[
"True, False",
"_____no_output_____"
],
[
"1 == 1",
"_____no_output_____"
],
[
"1 != 1",
"_____no_output_____"
],
[
"None",
"_____no_output_____"
],
[
"'hello'",
"_____no_output_____"
],
[
"\"hello\"",
"_____no_output_____"
],
[
"\"hello's\"",
"_____no_output_____"
],
[
"3 + 4j",
"_____no_output_____"
],
[
"[1,2,3,4]",
"_____no_output_____"
],
[
"[1,'a', 'hello', 3.14, 4j]",
"_____no_output_____"
],
[
"print('''This is\na triple\nquoted string''')",
"This is\na triple\nquoted string\n"
],
[
"set([1,1,2,2,3,4,3,4])",
"_____no_output_____"
],
[
"{'tom': 23, 'anne': 34}",
"_____no_output_____"
],
[
"(1,2,3)",
"_____no_output_____"
],
[
"x = [1,2,3,4]",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
],
[
"x[0]",
"_____no_output_____"
],
[
"x[-1]",
"_____no_output_____"
],
[
"x[0:2]",
"_____no_output_____"
],
[
"x[:2]",
"_____no_output_____"
],
[
"x[1:]",
"_____no_output_____"
],
[
"x[:]",
"_____no_output_____"
],
[
"contacts = {'tom': 23, 'anne': 34}",
"_____no_output_____"
],
[
"contacts['anne']",
"_____no_output_____"
],
[
"contacts.keys()",
"_____no_output_____"
],
[
"contacts.values()",
"_____no_output_____"
],
[
"names = list(contacts.keys())",
"_____no_output_____"
],
[
"names",
"_____no_output_____"
],
[
"names[0][1:3]",
"_____no_output_____"
],
[
"names['bob']",
"_____no_output_____"
],
[
"contacts['bob']",
"_____no_output_____"
],
[
"1/0",
"_____no_output_____"
],
[
"squares = []\nfor i in range(1, 10):\n squares.append(i**2)",
"_____no_output_____"
],
[
"squares",
"_____no_output_____"
],
[
"count = 0\nwhile count < 10:\n print(count)\n if count % 3 == 0:\n break\n count += 1",
"0\n"
],
[
"x = 10\nif x % 3 == 0:\n print('divisible by 3')\nelif x % 5 == 0:\n print('Divisible by 5')\nelse:\n print('something else')",
"Divisible by 5\n"
],
[
"help(range)",
"Help on class range in module builtins:\n\nclass range(object)\n | range(stop) -> range object\n | range(start, stop[, step]) -> range object\n | \n | Return an object that produces a sequence of integers from start (inclusive)\n | to stop (exclusive) by step. range(i, j) produces i, i+1, i+2, ..., j-1.\n | start defaults to 0, and stop is omitted! range(4) produces 0, 1, 2, 3.\n | These are exactly the valid indices for a list of 4 elements.\n | When step is given, it specifies the increment (or decrement).\n | \n | Methods defined here:\n | \n | __contains__(self, key, /)\n | Return key in self.\n | \n | __eq__(self, value, /)\n | Return self==value.\n | \n | __ge__(self, value, /)\n | Return self>=value.\n | \n | __getattribute__(self, name, /)\n | Return getattr(self, name).\n | \n | __getitem__(self, key, /)\n | Return self[key].\n | \n | __gt__(self, value, /)\n | Return self>value.\n | \n | __hash__(self, /)\n | Return hash(self).\n | \n | __iter__(self, /)\n | Implement iter(self).\n | \n | __le__(self, value, /)\n | Return self<=value.\n | \n | __len__(self, /)\n | Return len(self).\n | \n | __lt__(self, value, /)\n | Return self<value.\n | \n | __ne__(self, value, /)\n | Return self!=value.\n | \n | __new__(*args, **kwargs) from builtins.type\n | Create and return a new object. See help(type) for accurate signature.\n | \n | __reduce__(...)\n | helper for pickle\n | \n | __repr__(self, /)\n | Return repr(self).\n | \n | __reversed__(...)\n | Return a reverse iterator.\n | \n | count(...)\n | rangeobject.count(value) -> integer -- return number of occurrences of value\n | \n | index(...)\n | rangeobject.index(value, [start, [stop]]) -> integer -- return index of value.\n | Raise ValueError if the value is not present.\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | start\n | \n | step\n | \n | stop\n\n"
]
]
] | [
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c4b847065a7bf660fe039e319e5d9969c3117f | 21,028 | ipynb | Jupyter Notebook | notebooks/03h2_Demo-AEBS-AEJxLPS.ipynb | Swarm-DISC/Swarm_notebooks | 9299b132a0f7af71543d958e8af35651a65e626b | [
"MIT"
] | 3 | 2020-02-13T14:03:46.000Z | 2022-03-04T09:30:45.000Z | notebooks/03h2_Demo-AEBS-AEJxLPS.ipynb | Swarm-DISC/Swarm_notebooks | 9299b132a0f7af71543d958e8af35651a65e626b | [
"MIT"
] | 5 | 2020-02-14T13:27:55.000Z | 2022-02-24T21:48:19.000Z | notebooks/03h2_Demo-AEBS-AEJxLPS.ipynb | Swarm-DISC/Swarm_notebooks | 9299b132a0f7af71543d958e8af35651a65e626b | [
"MIT"
] | 2 | 2020-04-01T17:25:37.000Z | 2020-12-02T12:03:01.000Z | 45.124464 | 271 | 0.513173 | [
[
[
"# AEJxLPS (Auroral electrojets SECS)\n\n> Abstract: Access to the AEBS products, SECS type. This notebook uses code from the previous notebook to build a routine that is flexible to plot either the LC or SECS products - this demonstrates a prototype quicklook routine.",
"_____no_output_____"
]
],
[
[
"%load_ext watermark\n%watermark -i -v -p viresclient,pandas,xarray,matplotlib",
"_____no_output_____"
],
[
"from viresclient import SwarmRequest\nimport datetime as dt\nimport numpy as np\nimport pandas as pd\nimport xarray as xr\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\nrequest = SwarmRequest()",
"_____no_output_____"
]
],
[
[
"## AEBS product information\n\nSee previous notebook, \"Demo AEBS products (LC)\", for an introduction to these products.",
"_____no_output_____"
],
[
"### Function to request data from VirES and reshape it",
"_____no_output_____"
]
],
[
[
"def fetch_data(start_time=None, end_time=None, spacecraft=None, AEBS_type=\"L\"):\n \"\"\"DUPLICATED FROM PREVIOUS NOTEBOOK. TO BE REFACTORED\"\"\"\n\n # Fetch data from VirES\n auxiliaries = ['OrbitNumber', 'QDLat', 'QDOrbitDirection', 'OrbitDirection', 'MLT']\n if AEBS_type == \"L\":\n measurement_vars = [\"J_NE\"]\n elif AEBS_type == \"S\":\n measurement_vars = [\"J_CF_NE\", \"J_DF_NE\"]\n # Fetch LPL/LPS\n request.set_collection(f'SW_OPER_AEJ{spacecraft}LP{AEBS_type}_2F')\n request.set_products(\n measurements=measurement_vars,\n auxiliaries=auxiliaries,\n )\n data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)\n ds_lp = data.as_xarray()\n # Fetch LPL/LPS Quality\n request.set_collection(f'SW_OPER_AEJ{spacecraft}LP{AEBS_type}_2F:Quality')\n request.set_products(\n measurements=['RMS_misfit', 'Confidence'],\n )\n data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)\n ds_lpq = data.as_xarray()\n # Fetch PBL\n request.set_collection(f'SW_OPER_AEJ{spacecraft}PB{AEBS_type}_2F')\n request.set_products(\n measurements=['PointType', 'Flags'],\n auxiliaries=auxiliaries\n )\n data = request.get_between(start_time, end_time, asynchronous=False, show_progress=False)\n ds_pb = data.as_xarray()\n\n # Meaning of PointType\n PointType_meanings = {\n \"WEJ_peak\": 0, # minimum\n \"EEJ_peak\": 1, # maximum\n \"WEJ_eq_bound_s\": 2, # equatorward (pair start)\n \"EEJ_eq_bound_s\": 3,\n \"WEJ_po_bound_s\": 6, # poleward\n \"EEJ_po_bound_s\": 7,\n \"WEJ_eq_bound_e\": 10, # equatorward (pair end)\n \"EEJ_eq_bound_e\": 11,\n \"WEJ_po_bound_e\": 14, # poleward\n \"EEJ_po_bound_e\": 15,\n }\n # Add new data variables (boolean Type) according to the dictionary above\n ds_pb = ds_pb.assign(\n {name: ds_pb[\"PointType\"] == PointType_meanings[name]\n for name in PointType_meanings.keys()}\n )\n\n # Merge datasets together\n def drop_duplicate_times(_ds):\n _, index = np.unique(_ds['Timestamp'], return_index=True)\n return _ds.isel(Timestamp=index)\n def merge_attrs(_ds1, _ds2):\n attrs = {\"Sources\":[], \"MagneticModels\":[], \"RangeFilters\":[]}\n for item in [\"Sources\", \"MagneticModels\", \"RangeFilters\"]:\n attrs[item] = list(set(_ds1.attrs[item] + _ds2.attrs[item]))\n return attrs\n # Create new dataset from just the newly created PointType arrays\n # This is created on a non-repeating Timestamp coordinate\n ds = xr.Dataset(\n {name: ds_pb[name].where(ds_pb[name], drop=True)\n for name in PointType_meanings.keys()}\n )\n # Merge in the positional and auxiliary data\n data_vars = list(set(ds_pb.data_vars).difference(set(PointType_meanings.keys())))\n data_vars.remove(\"PointType\")\n ds = ds.merge(\n (ds_pb[data_vars]\n .pipe(drop_duplicate_times))\n )\n # Merge together with the LPL data\n # Note that the Timestamp coordinates aren't equal\n\n # Separately merge data with matching and missing time sample points in ds_lpl\n idx_present = list(set(ds[\"Timestamp\"].values).intersection(set(ds_lp[\"Timestamp\"].values)))\n idx_missing = list(set(ds[\"Timestamp\"].values).difference(set(ds_lp[\"Timestamp\"].values)))\n # Override prioritises the first dataset (ds_lpl) where there are conflicts\n ds2 = ds_lp.merge(ds.sel(Timestamp=idx_present), join=\"outer\", compat=\"override\")\n ds2 = ds2.merge(ds.sel(Timestamp=idx_missing), join=\"outer\")\n # Update the metadata\n ds2.attrs = merge_attrs(ds_lp, ds_pb)\n\n # Switch the point type arrays to uint8 or bool for performance?\n # But the .where operations later cast them back to float64 
since gaps are filled with nan\n for name in PointType_meanings.keys():\n ds2[name] = ds2[name].astype(\"uint8\").fillna(False)\n # ds2[name] = ds2[name].fillna(False).astype(bool)\n\n ds = ds2\n\n # Append the PBL Flags information into the LPL:Quality dataset to use as a lookup table\n ds_lpq = ds_lpq.assign(\n Flags_PBL=\n ds_pb[\"Flags\"]\n .pipe(drop_duplicate_times)\n .reindex_like(ds_lpq, method=\"nearest\"),\n )\n\n return ds, ds_lpq\n\n\n",
"_____no_output_____"
]
],
[
[
"### Plotting function",
"_____no_output_____"
]
],
[
[
"# Bit numbers which indicate non-nominal state\n# Check SW-DS-DTU-GS-003_AEBS_PDD for details\nBITS_PBL_FLAGS_EEJ_MINOR = (2, 3, 6)\nBITS_PBL_FLAGS_WEJ_MINOR = (4, 5, 6)\nBITS_PBL_FLAGS_EEJ_BAD = (0, 7, 8, 11)\nBITS_PBL_FLAGS_WEJ_BAD = (1, 9, 10, 12)\n\ndef check_PBL_Flags(flags=0b0, EJ_type=\"WEJ\"):\n \"\"\"Return \"good\", \"poor\", or \"bad\" depending on status\"\"\"\n def _check_bits(bitno_set):\n return any(flags & (1 << bitno) for bitno in bitno_set)\n if EJ_type == \"WEJ\":\n if _check_bits(BITS_PBL_FLAGS_WEJ_BAD):\n return \"bad\"\n elif _check_bits(BITS_PBL_FLAGS_WEJ_MINOR):\n return \"poor\"\n else:\n return \"good\"\n elif EJ_type == \"EEJ\":\n if _check_bits(BITS_PBL_FLAGS_EEJ_BAD):\n return \"bad\"\n elif _check_bits(BITS_PBL_FLAGS_EEJ_MINOR):\n return \"poor\"\n else:\n return \"good\"\n\nglyphs = {\n \"WEJ_peak\": {\"marker\": 'v', \"color\":'tab:red'}, # minimum\n \"EEJ_peak\": {\"marker\": '^', \"color\":'tab:purple'}, # maximum\n \"WEJ_eq_bound_s\": {\"marker\": '>', \"color\":'black'}, # equatorward (pair start)\n \"EEJ_eq_bound_s\": {\"marker\": '>', \"color\":'black'},\n \"WEJ_po_bound_s\": {\"marker\": '>', \"color\":'black'}, # poleward\n \"EEJ_po_bound_s\": {\"marker\": '>', \"color\":'black'},\n \"WEJ_eq_bound_e\": {\"marker\": '<', \"color\":'black'}, # equatorward (pair end)\n \"EEJ_eq_bound_e\": {\"marker\": '<', \"color\":'black'},\n \"WEJ_po_bound_e\": {\"marker\": '<', \"color\":'black'}, # poleward\n \"EEJ_po_bound_e\": {\"marker\": '<', \"color\":'black'},\n}\n\n\ndef plot_stack(ds, ds_lpq, hemisphere=\"North\", x_axis=\"Latitude\", AEBS_type=\"L\"):\n # Identify which variable to plot from dataset\n # If accessing the SECS (LPS) data, sum the DF & CF parts\n if \"J_CF_NE\" in ds.data_vars:\n ds[\"J_NE\"] = ds[\"J_DF_NE\"] + ds[\"J_CF_NE\"]\n plotvar = \"J_NE\"\n orbdir = \"OrbitDirection\" if x_axis==\"Latitude\" else \"QDOrbitDirection\"\n markersize = 1 if AEBS_type==\"S\" else 5\n # Select hemisphere\n if hemisphere == \"North\":\n ds = ds.where(ds[\"Latitude\"]>0, drop=True)\n elif hemisphere == \"South\":\n ds = ds.where(ds[\"Latitude\"]<0, drop=True)\n # Generate plot with split by columns: ascending/descending to/from pole\n # by rows: successive orbits\n fig, axes = plt.subplots(\n nrows=len(ds.groupby(\"OrbitNumber\")), ncols=2, sharex=\"col\", sharey=\"all\",\n figsize=(10, 20)\n )\n max_ylim = np.max(np.abs(ds[plotvar].sel({\"NE\": \"E\"})))\n # Loop through each orbit\n for i, (_, ds_orbit) in enumerate(ds.groupby(\"OrbitNumber\")):\n if hemisphere == \"North\":\n ds_orb_asc = ds_orbit.where(ds_orbit[orbdir] == 1, drop=True)\n ds_orb_desc = ds_orbit.where(ds_orbit[orbdir] == -1, drop=True)\n if hemisphere == \"South\":\n ds_orb_asc = ds_orbit.where(ds_orbit[orbdir] == -1, drop=True)\n ds_orb_desc = ds_orbit.where(ds_orbit[orbdir] == 1, drop=True)\n # Loop through ascending and descending sections\n for j, _ds in enumerate((ds_orb_asc, ds_orb_desc)):\n if len(_ds.Timestamp) == 0:\n continue\n # Line plot of current strength\n axes[i, j].plot(\n _ds[x_axis], _ds[plotvar].sel({\"NE\": \"E\"}),\n color=\"tab:blue\", marker=\".\", markersize=markersize, linestyle=\"\"\n )\n axes[i, j].plot(\n _ds[x_axis], _ds[plotvar].sel({\"NE\": \"N\"}),\n color=\"tab:grey\", marker=\".\", markersize=markersize, linestyle=\"\"\n )\n # Plot glyphs at the peaks and boundaries locations\n for name in glyphs.keys():\n __ds = _ds.where(_ds[name], drop=True)\n try:\n for lat in __ds[x_axis]:\n axes[i, j].plot(\n lat, 0,\n marker=glyphs[name][\"marker\"], 
color=glyphs[name][\"color\"]\n )\n except Exception:\n pass\n # Identify Quality and Flags info\n # Use either the start time of the section or the end, depending on asc or desc\n index = 0 if j == 0 else -1\n t = _ds[\"Timestamp\"].isel(Timestamp=index).values\n _ds_qualflags = ds_lpq.sel(Timestamp=t, method=\"nearest\")\n pbl_flags = int(_ds_qualflags[\"Flags_PBL\"].values)\n lpl_rms_misfit = float(_ds_qualflags[\"RMS_misfit\"].values)\n lpl_confidence = float(_ds_qualflags[\"Confidence\"].values)\n # Shade WEJ and EEJ regions, only if well-defined\n # def _shade_EJ_region(_ds=None, EJ=\"WEJ\", color=\"tab:red\", alpha=0.3):\n wej_status = check_PBL_Flags(pbl_flags, \"WEJ\")\n eej_status = check_PBL_Flags(pbl_flags, \"EEJ\")\n if wej_status in [\"good\", \"poor\"]:\n alpha = 0.3 if wej_status == \"good\" else 0.1\n try:\n WEJ_left = _ds.where(\n (_ds[\"WEJ_eq_bound_s\"] == 1) | (_ds[\"WEJ_po_bound_s\"] == 1), drop=True)\n WEJ_right = _ds.where(\n (_ds[\"WEJ_eq_bound_e\"] == 1) | (_ds[\"WEJ_po_bound_e\"] == 1), drop=True)\n x1 = WEJ_left[x_axis][0]\n x2 = WEJ_right[x_axis][0]\n axes[i, j].fill_betweenx(\n [-max_ylim, max_ylim], [x1, x1], [x2, x2], color=\"tab:red\", alpha=alpha)\n except Exception:\n pass\n if eej_status in [\"good\", \"poor\"]:\n alpha = 0.3 if eej_status == \"good\" else 0.15\n try:\n EEJ_left = _ds.where(\n (_ds[\"EEJ_eq_bound_s\"] == 1) | (_ds[\"EEJ_po_bound_s\"] == 1), drop=True)\n EEJ_right = _ds.where(\n (_ds[\"EEJ_eq_bound_e\"] == 1) | (_ds[\"EEJ_po_bound_e\"] == 1), drop=True)\n x1 = EEJ_left[x_axis][0]\n x2 = EEJ_right[x_axis][0]\n axes[i, j].fill_betweenx(\n [-max_ylim, max_ylim], [x1, x1], [x2, x2], color=\"tab:purple\", alpha=alpha)\n except Exception:\n pass\n # Write the LPL:Quality and PBL Flags info\n ha = \"right\" if j == 0 else \"left\"\n textx = 0.98 if j == 0 else 0.02\n axes[i, j].text(\n textx, 0.95,\n f\"RMS Misfit {np.round(lpl_rms_misfit, 2)}; Confidence {np.round(lpl_confidence, 2)}\",\n transform=axes[i, j].transAxes, verticalalignment=\"top\", horizontalalignment=ha\n )\n axes[i, j].text(\n textx, 0.05,\n f\"PBL Flags {pbl_flags:013b}\",\n transform=axes[i, j].transAxes, verticalalignment=\"bottom\", horizontalalignment=ha\n )\n # Write the start/end time and MLT of the section, and the orbit number\n def _format_utc(t):\n return f\"UTC {t.strftime('%H:%M')}\"\n def _format_mlt(mlt):\n hour, fraction = divmod(mlt, 1)\n t = dt.time(int(hour), minute=int(60*fraction))\n return f\"MLT {t.strftime('%H:%M')}\"\n try:\n # Left part (section starting UTC, MLT, OrbitNumber)\n time_s = pd.to_datetime(ds_orb_asc[\"Timestamp\"].isel(Timestamp=0).data)\n mlt_s = ds_orb_asc[\"MLT\"].dropna(dim=\"Timestamp\").isel(Timestamp=0).data\n orbit_number = int(ds_orb_asc[\"OrbitNumber\"].isel(Timestamp=0).data)\n axes[i, 0].text(\n 0.01, 0.95, f\"{_format_utc(time_s)}\\n{_format_mlt(mlt_s)}\",\n transform=axes[i, 0].transAxes, verticalalignment=\"top\"\n )\n axes[i, 0].text(\n 0.01, 0.05, f\"Orbit {orbit_number}\",\n transform=axes[i, 0].transAxes, verticalalignment=\"bottom\"\n )\n except Exception:\n pass\n try:\n # Right part (section ending UTC, MLT)\n time_e = pd.to_datetime(ds_orb_desc[\"Timestamp\"].isel(Timestamp=-1).data)\n mlt_e = ds_orb_desc[\"MLT\"].dropna(dim=\"Timestamp\").isel(Timestamp=-1).data\n axes[i, 1].text(\n 0.99, 0.95, f\"{_format_utc(time_e)}\\n{_format_mlt(mlt_e)}\",\n transform=axes[i, 1].transAxes, verticalalignment=\"top\", horizontalalignment=\"right\"\n )\n except Exception:\n pass\n # Extra config of axes and figure text\n axes[0, 
0].set_ylim(-max_ylim, max_ylim)\n if hemisphere == \"North\":\n axes[0, 0].set_xlim(50, 90)\n axes[0, 1].set_xlim(90, 50)\n elif hemisphere == \"South\":\n axes[0, 0].set_xlim(-50, -90)\n axes[0, 1].set_xlim(-90, -50)\n for ax in axes.flatten():\n ax.grid()\n axes[-1, 0].set_xlabel(x_axis)\n axes[-1, 0].set_ylabel(\"Horizontal currents\\n[ A.km$^{-1}$ ]\")\n time = pd.to_datetime(ds[\"Timestamp\"].isel(Timestamp=0).data)\n spacecraft = ds[\"Spacecraft\"].dropna(dim=\"Timestamp\").isel(Timestamp=0).data\n AEBS_type_name = \"LC\" if AEBS_type == \"L\" else \"SECS\"\n fig.text(\n 0.5, 0.9, f\"{time.strftime('%Y-%m-%d')}\\nSwarm {spacecraft}\\n{hemisphere}\\nAEBS: {AEBS_type_name}\",\n transform=fig.transFigure, horizontalalignment=\"center\",\n )\n fig.subplots_adjust(wspace=0, hspace=0)\n return fig, axes",
"_____no_output_____"
]
],
[
[
"### Fetching and plotting function\n",
"_____no_output_____"
]
],
[
[
"def quicklook(day=\"2015-01-01\", hemisphere=\"North\", spacecraft=\"A\", AEBS_type=\"L\", xaxis=\"Latitude\"):\n start_time = dt.datetime.fromisoformat(day)\n end_time = start_time + dt.timedelta(days=1)\n ds, ds_lpq = fetch_data(start_time, end_time, spacecraft, AEBS_type)\n fig, axes = plot_stack(ds, ds_lpq, hemisphere, xaxis, AEBS_type)\n return ds, fig, axes",
"_____no_output_____"
]
],
[
[
"\nConsecutive orbits are shown in consecutive rows, centered over the pole. The starting and ending times (UTC and MLT) of the orbital section are shown at the left and right. Westward (WEJ) and Eastward (EEJ) electrojet extents and peak intensities are indicated:\n- Blue dots: Estimated current density in Eastward direction, J_NE (E)\n- Grey dots: Estimated current density in Northward direction, J_NE (N)\n- Red/Purple shaded region: WEJ/EEJ extent (boundaries marked by black triangles)\n- Red/Purple triangles: Locations of peak WEJ/EEJ intensity\n\nSelect AEBS_type as S to get SECS results, L to get LC results \nSECS = spherical elementary current systems method \nLC = Line current method\n\nNotes: \nThe code is currently quite fragile, so it is broken on some days. Sometimes the electrojet regions are not shaded correctly. Only the horizontal currents are currently shown.",
"_____no_output_____"
]
],
[
[
"quicklook(day=\"2016-01-01\", hemisphere=\"North\", spacecraft=\"A\", AEBS_type=\"S\", xaxis=\"Latitude\");",
"_____no_output_____"
],
[
"quicklook(day=\"2016-01-01\", hemisphere=\"North\", spacecraft=\"A\", AEBS_type=\"L\", xaxis=\"Latitude\");",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
d0c4bfedcc3421f03ffbcc17c2facf98a3efb6c9 | 46,452 | ipynb | Jupyter Notebook | explore.ipynb | jiashenggu/kb | 0d5ef502169e803d47d5d8d333d23fb510ee0730 | [
"Apache-2.0"
] | null | null | null | explore.ipynb | jiashenggu/kb | 0d5ef502169e803d47d5d8d333d23fb510ee0730 | [
"Apache-2.0"
] | null | null | null | explore.ipynb | jiashenggu/kb | 0d5ef502169e803d47d5d8d333d23fb510ee0730 | [
"Apache-2.0"
] | null | null | null | 122.564644 | 34,091 | 0.599436 | [
[
[
"%load_ext autoreload\n%autoreload 2\nfrom allennlp.commands.evaluate import *\nfrom kb.include_all import *\nfrom allennlp.nn import util as nn_util\nfrom allennlp.common.tqdm import Tqdm\nimport torch\nimport warnings\nwarnings.filterwarnings(\"ignore\")\narchive_file = \"knowbert_wiki_wordnet_model\"\ncuda_device = -1\n# line = \"banana\\tcolor\\tyellow\"",
"/nas/home/gujiashe/miniconda3/envs/knowbert/lib/python3.6/site-packages/sklearn/utils/linear_assignment_.py:22: FutureWarning: The linear_assignment_ module is deprecated in 0.21 and will be removed from 0.23. Use scipy.optimize.linear_sum_assignment instead.\n FutureWarning)\n/nas/home/gujiashe/miniconda3/envs/knowbert/lib/python3.6/site-packages/allennlp/data/token_indexers/token_characters_indexer.py:51: UserWarning: You are using the default value (0) of `min_padding_length`, which can cause some subtle bugs (more info see https://github.com/allenai/allennlp/issues/1954). Strongly recommend to set a value, usually the maximum size of the convolutional layer size when using CnnEncoder.\n UserWarning)\n"
],
[
"# import logging\n\n# logger = logging.getLogger() # 不加名称设置root logger\n# logger.setLevel(logging.INFO)\n# formatter = logging.Formatter(\n# '%(asctime)s - %(name)s - %(levelname)s: - %(message)s',\n# datefmt='%Y-%m-%d %H:%M:%S')\n\n# # 使用FileHandler输出到文件\n# fh = logging.FileHandler('log.txt')\n# fh.setLevel(logging.DEBUG)\n# fh.setFormatter(formatter)\n\n# # 使用StreamHandler输出到屏幕\n# ch = logging.StreamHandler()\n# ch.setLevel(logging.DEBUG)\n# ch.setFormatter(formatter)\n\n# # 添加两个Handler\n# logger.addHandler(ch)\n# logger.addHandler(fh)",
"_____no_output_____"
],
[
"archive = load_archive(archive_file, cuda_device)\nconfig = archive.config\nprepare_environment(config)",
"_____no_output_____"
],
[
"config = Params.from_file(\"/nas/home/gujiashe/kb/knowbert_wiki_wordnet_model/config.json\")",
"_____no_output_____"
],
[
"reader_params = config.pop('dataset_reader')\nif reader_params['type'] == 'multitask_reader':\n reader_params = reader_params['dataset_readers']['language_modeling']\n# reader_params['num_workers'] = 0\nvalidation_reader_params = {\n \"type\": \"kg_probe\",\n \"tokenizer_and_candidate_generator\": reader_params['base_reader']['tokenizer_and_candidate_generator'].as_dict()\n}\ndataset_reader = DatasetReader.from_params(Params(validation_reader_params))\n\nvocab = dataset_reader._tokenizer_and_candidate_generator.bert_tokenizer.vocab\ntoken2word = {}\nfor k, v in vocab.items():\n token2word[v] = k",
"_____no_output_____"
],
[
"dataset_path = \"/nas/home/gujiashe/trans/knowbert_ppl_top10.tsv\"\n# birth_property = dataset_path.split('_')[1].split('.')[0]\ninstances = dataset_reader.read(dataset_path)\nprint(instances[0])",
"9960it [00:15, 648.26it/s] "
],
[
"import pandas as pd\n \n# birth_df = pd.read_csv('/nas/home/gujiashe/kb/sentences_birth.tsv', sep='\\t', header = None)\nbirth_df = pd.read_csv(dataset_path, sep='\\t', header = None)\n\nbirth = birth_df.values\nprint(birth[0])",
"['5 7'\n 'Selina Keagon gave birth to Cornelius Tuayan Keagon at Monrovia on 06 July 1996 .']\n"
],
[
"instances[0][\"lm_label_ids\"][2]",
"_____no_output_____"
],
[
"import csv\nmodel = archive.model\nmodel.eval()\n\n\nprint(\"start\")\n# metrics = evaluate(model, instances, iterator, cuda_device, \"\")\ndata_iterator = DataIterator.from_params(Params(\n {\"type\": \"basic\", \"batch_size\": 1}\n))\ndata_iterator.index_with(model.vocab)\niterator = data_iterator(instances,\n num_epochs=1,\n shuffle=False)\nlogger.info(\"Iterating over dataset\")\ngenerator_tqdm = Tqdm.tqdm(iterator, total=data_iterator.get_num_batches(instances))\nrows_id = 0\n# with open('birth3.txt', 'wt') as f:\nbirth_spreadsheet = open(birth_property+\"_spreadsheet_knowbert.tsv\", \"w\")\ntsv_writer = csv.writer(birth_spreadsheet, delimiter='\\t')\n\ntotal_ranks = []\n\nfor instance in generator_tqdm:\n\n rows_id+=1\n\n batch = nn_util.move_to_device(instance, cuda_device)\n output_dict = model(**batch)\n pooled_output = output_dict.get(\"pooled_output\")\n contextual_embeddings = output_dict.get(\"contextual_embeddings\")\n prediction_scores, seq_relationship_score = model.pretraining_heads(\n contextual_embeddings, pooled_output\n )\n prediction_scores = prediction_scores.view(-1, prediction_scores.shape[-1])\n \n\n ranks = torch.argsort(prediction_scores, dim = 1, descending=True)\n ranks = torch.argsort(ranks, dim = 1)\n vals, idxs = torch.topk(prediction_scores, k = 5, dim = 1)\n idxs = idxs.cpu().numpy()\n lines = []\n # print(\"row: \", rows_id, file=f)\n # print(\"================\", file=f)\n # print(\"source: \", birth[rows_id - 1, 1], file = f)\n masked_tokens = []\n for id in range(len(idxs)):\n masked_tokens += [token2word[instance[\"tokens\"][\"tokens\"][0][id].item()]]\n masked_tokens = \" \".join(masked_tokens[1: -1])\n masked_ranks = []\n \n for k in range(1):\n source = []\n line = []\n for i, idx in enumerate(idxs):\n if instance[\"tokens\"][\"tokens\"][0][i] != 103:\n line += [token2word[instance[\"tokens\"][\"tokens\"][0][i].item()]]\n else:\n line += [token2word[idx[k]]]\n text_id = instance[\"lm_label_ids\"][\"lm_labels\"][0][i].item()\n masked_ranks += [ranks[i][text_id].item()]\n \n line = \" \".join(line[1: -1])\n line = line.split(\" ##\")\n line = \"\".join(line)\n total_ranks += masked_ranks\n masked_ranks = list(map(str, masked_ranks))\n masked_ranks = \",\".join(masked_ranks)\n # print(\"{} : \".format(k) + line, file=f)\n # print(\"masked_ranks: \", masked_ranks , file = f)\n # print(\"masked_tokens: \", masked_tokens, file = f)\n # print(\"================\", file=f)\n row = [birth[rows_id - 1, 1], masked_tokens, line, masked_ranks]\n \n tsv_writer.writerow(row)\n# if rows_id>100:\n# break\nbirth_spreadsheet.close()\n",
"2022-03-29 20:39:39 - allennlp.common.from_params - INFO: - instantiating class <class 'allennlp.data.iterators.data_iterator.DataIterator'> from params {'type': 'basic', 'batch_size': 1} and extras set()\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - type = basic\n2022-03-29 20:39:39 - allennlp.common.from_params - INFO: - instantiating class <class 'allennlp.data.iterators.basic_iterator.BasicIterator'> from params {'batch_size': 1} and extras set()\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - batch_size = 1\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - instances_per_epoch = None\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - max_instances_in_memory = None\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - cache_instances = False\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - track_epoch = False\n2022-03-29 20:39:39 - allennlp.common.params - INFO: - maximum_samples_per_batch = None\n2022-03-29 20:39:39 - root - INFO: - Iterating over dataset\n"
],
[
"%matplotlib notebook\nimport matplotlib.pyplot as plt\nplt.hist(total_ranks, bins = 100, range = [0, 100])\nplt.savefig(birth_property+'_knowbert.jpg')\n",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c4d7b4a0b0b48daa70b3e098395b6eda506f9d | 203,637 | ipynb | Jupyter Notebook | Task IV.ipynb | Chang-LeHung/2021-Autumn-Deep-Learning | 9905e4533a55b480641a9fe55947c4b9db0372f1 | [
"MIT"
] | 1 | 2021-10-12T02:46:17.000Z | 2021-10-12T02:46:17.000Z | Task IV.ipynb | Chang-LeHung/2021-Autumn-Deep-Learning | 9905e4533a55b480641a9fe55947c4b9db0372f1 | [
"MIT"
] | null | null | null | Task IV.ipynb | Chang-LeHung/2021-Autumn-Deep-Learning | 9905e4533a55b480641a9fe55947c4b9db0372f1 | [
"MIT"
] | 1 | 2021-11-30T03:46:14.000Z | 2021-11-30T03:46:14.000Z | 145.976344 | 75,436 | 0.789002 | [
[
[
"import torch\nfrom torch import nn, optim\nfrom torch.utils.data import DataLoader, Dataset\nfrom torchvision import datasets, transforms\nfrom torchvision.utils import make_grid\nimport matplotlib\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom IPython import display\nimport torchsummary as ts\nimport numpy as np",
"_____no_output_____"
],
[
"sns.set()\ndisplay.set_matplotlib_formats(\"svg\")\nplt.rcParams['font.sans-serif'] = \"Liberation Sans\"\ndevice = torch.device(\"cuda\")",
"_____no_output_____"
],
[
"torch.cuda.is_available()",
"_____no_output_____"
],
[
"trans = transforms.Compose([\n transforms.Resize((32, 32)),\n transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\n])",
"_____no_output_____"
],
[
"dataset = datasets.ImageFolder(\"dataset/faces/\", transform=trans)",
"_____no_output_____"
],
[
"data_loader = DataLoader(dataset, batch_size=16, shuffle=True, num_workers=4, \n drop_last=True)",
"_____no_output_____"
],
[
"images = make_grid(next(iter(data_loader))[0], normalize=True, padding=5, pad_value=1)\nplt.imshow(images.permute(1, 2, 0))\nplt.axis(\"off\")\nplt.grid(False)",
"_____no_output_____"
],
[
"def imshow(data):\n \n images = make_grid(data.detach().cpu() , normalize=True, padding=5, pad_value=1)\n plt.imshow(images.permute(1, 2, 0))\n plt.axis(\"off\")\n plt.grid(False)\n plt.pause(0.0001)",
"_____no_output_____"
],
[
"def weights_init(m):\n classname = m.__class__.__name__\n if classname.find('Conv') != -1:\n nn.init.normal_(m.weight.data, 0.0, 0.02)\n elif classname.find('BatchNorm') != -1:\n nn.init.normal_(m.weight.data, 1.0, 0.02)\n nn.init.constant_(m.bias.data, 0)",
"_____no_output_____"
],
[
"class Discriminator(nn.Module):\n \n def __init__(self):\n super().__init__()\n self.main = nn.Sequential(\n nn.Conv2d(in_channels=3, out_channels=6, kernel_size=4, stride=2,\n padding=1),\n nn.LeakyReLU(0.2, inplace=True),\n \n nn.Conv2d(in_channels=6, out_channels=12, kernel_size=4, stride=2,\n padding=1),\n nn.BatchNorm2d(12),\n nn.LeakyReLU(0.2, inplace=True),\n \n nn.Conv2d(in_channels=12, out_channels=24, kernel_size=4, stride=2,\n padding=1),\n nn.BatchNorm2d(24),\n nn.LeakyReLU(0.2, inplace=True),\n \n nn.Conv2d(24, 1, 4, 1, 0, bias=False),\n nn.Sigmoid()\n )\n \n def forward(self, x):\n x = self.main(x)\n x = x.reshape(-1)\n return x\n",
"_____no_output_____"
],
[
"class Generator(nn.Module):\n \n def __init__(self, init_size=100):\n super().__init__()\n self.expand_dim = nn.Linear(init_size, 1024)\n self.init_size = init_size\n self.main = nn.Sequential(\n nn.ConvTranspose2d(64, 32, kernel_size=4, \n stride=2, padding=1, bias=False),\n nn.BatchNorm2d(32),\n nn.ReLU(),\n \n nn.ConvTranspose2d(32, 12, kernel_size=4, \n stride=2, padding=1, bias=False),\n nn.BatchNorm2d(12),\n nn.ReLU(),\n \n nn.ConvTranspose2d(12, 3, kernel_size=4, \n stride=2, padding=1, bias=False),\n nn.Tanh()\n )\n \n def forward(self, x):\n x = self.expand_dim(x).reshape(-1, 64, 4, 4)\n x = self.main(x)\n return x",
"_____no_output_____"
],
[
"netD = Discriminator()",
"_____no_output_____"
],
[
"netD(torch.randn(16, 3, 32, 32))",
"_____no_output_____"
],
[
"netG = Generator()",
"_____no_output_____"
],
[
"netG(torch.randn(16, 100)).shape",
"_____no_output_____"
],
[
"BATCH_SIZE = 64\nININT_SIZE = 100\ndata_loader = DataLoader(dataset, batch_size=BATCH_SIZE, \n shuffle=True, num_workers=4, drop_last=True)",
"_____no_output_____"
],
[
"Epoch = 1000\nD_losses = []\nG_losses = []",
"_____no_output_____"
],
[
"generator = Generator(ININT_SIZE).to(device)\ndiscirminator = Discriminator().to(device)\ngenerator.apply(weights_init)\ndiscirminator.apply(weights_init)\ncriterion = nn.BCELoss()\nOPTIMIZER_G = optim.Adam(generator.parameters(), lr=4e-4, betas=(0.5, 0.999))\nOPTIMIZER_D = optim.Adam(discirminator.parameters(), lr=1e-4, betas=(0.5, 0.999))",
"_____no_output_____"
],
[
"pdr, pdf, pg = None, None, None\nfor epoch in range(1, 1 + Epoch):\n dis_temp_loss = []\n gen_temp_loss = []\n for idx, (d, l) in enumerate(data_loader):\n \n d = d.to(device)\n l = l.float().to(device)\n \n out = discirminator(d)\n pdr = out.mean().item()\n real_loss = criterion(out, torch.ones_like(l))\n \n noise = torch.randn(BATCH_SIZE, ININT_SIZE).to(device)\n images = generator(noise)\n out = discirminator(images.detach().to(device))\n pdf = out.mean().item()\n fake_loss = criterion(out, torch.zeros_like(l))\n \n OPTIMIZER_D.zero_grad()\n real_loss.backward()\n fake_loss.backward()\n OPTIMIZER_D.step()\n \n noise = torch.randn(BATCH_SIZE, ININT_SIZE).to(device)\n images = generator(noise)\n out = discirminator(images)\n pg = out.mean().item()\n loss = criterion(out, torch.ones_like(l))\n \n OPTIMIZER_G.zero_grad()\n loss.backward()\n OPTIMIZER_G.step()\n \n d_loss = fake_loss + real_loss\n \n print(\"Epoch = {:<2} Step[{:3}/{:3}] Dis-Loss = {:.5f} Gen-Loss = {:.5f} acc = {} {} {}\"\\\n .format(epoch, idx + 1, len(data_loader), d_loss.item(), \n loss.item(), pdr, pdf, pg))\n \n dis_temp_loss.append(d_loss.item())\n gen_temp_loss.append(loss.item())\n D_losses.append(np.mean(dis_temp_loss))\n G_losses.append(np.mean(gen_temp_loss))\n \n if epoch > 1:\n fig, ax = plt.subplots()\n ax.plot(np.arange(len(D_losses)) + 1, \n D_losses, label=\"Discriminator\", ls=\"-.\")\n ax.plot(np.arange(len(G_losses)) + 1, \n G_losses, label=\"Generator\", ls=\"--\")\n ax.set_xlabel(\"Epoch\")\n ax.set_ylabel(\"Loss\")\n ax.set_title(\"GAN Training process\")\n ax.legend(bbox_to_anchor=[1, 1.02])\n plt.pause(0.0001)\n imshow(images[:16])\n imshow(d[:16])\n if epoch % 10 == 0:\n display.clear_output()",
"Epoch = 11 Step[ 1/800] Dis-Loss = 1.38383 Gen-Loss = 0.72730 acc = 0.4860272705554962 0.4842662513256073 0.48328953981399536\nEpoch = 11 Step[ 2/800] Dis-Loss = 1.38039 Gen-Loss = 0.72655 acc = 0.486137717962265 0.4826032221317291 0.4836145043373108\nEpoch = 11 Step[ 3/800] Dis-Loss = 1.37738 Gen-Loss = 0.72841 acc = 0.4880523681640625 0.4830998182296753 0.4827190041542053\nEpoch = 11 Step[ 4/800] Dis-Loss = 1.38269 Gen-Loss = 0.72815 acc = 0.48720768094062805 0.4848710298538208 0.4828382730484009\nEpoch = 11 Step[ 5/800] Dis-Loss = 1.37848 Gen-Loss = 0.72858 acc = 0.4882913827896118 0.483909547328949 0.48263925313949585\nEpoch = 11 Step[ 6/800] Dis-Loss = 1.38180 Gen-Loss = 0.72358 acc = 0.4878328740596771 0.4851211905479431 0.48507219552993774\nEpoch = 11 Step[ 7/800] Dis-Loss = 1.37879 Gen-Loss = 0.72100 acc = 0.48954012989997864 0.4853658080101013 0.4863145351409912\nEpoch = 11 Step[ 8/800] Dis-Loss = 1.38134 Gen-Loss = 0.72638 acc = 0.48867276310920715 0.48573219776153564 0.48372161388397217\nEpoch = 11 Step[ 9/800] Dis-Loss = 1.38069 Gen-Loss = 0.71519 acc = 0.48869192600250244 0.48541706800460815 0.48914510011672974\nEpoch = 11 Step[ 10/800] Dis-Loss = 1.38357 Gen-Loss = 0.72644 acc = 0.48992031812667847 0.48820018768310547 0.48368561267852783\nEpoch = 11 Step[ 11/800] Dis-Loss = 1.39168 Gen-Loss = 0.71431 acc = 0.48612266778945923 0.48833757638931274 0.4896155893802643\nEpoch = 11 Step[ 12/800] Dis-Loss = 1.38771 Gen-Loss = 0.71736 acc = 0.48988470435142517 0.4903064966201782 0.4881060719490051\nEpoch = 11 Step[ 13/800] Dis-Loss = 1.39010 Gen-Loss = 0.71396 acc = 0.4893726706504822 0.49092572927474976 0.48978322744369507\nEpoch = 11 Step[ 14/800] Dis-Loss = 1.39196 Gen-Loss = 0.70987 acc = 0.49027377367019653 0.4927903413772583 0.49176132678985596\nEpoch = 11 Step[ 15/800] Dis-Loss = 1.39200 Gen-Loss = 0.70740 acc = 0.4895239472389221 0.49208325147628784 0.4930088222026825\nEpoch = 11 Step[ 16/800] Dis-Loss = 1.39668 Gen-Loss = 0.70158 acc = 0.4921491742134094 0.49703508615493774 0.49589377641677856\nEpoch = 11 Step[ 17/800] Dis-Loss = 1.39471 Gen-Loss = 0.69483 acc = 0.494412899017334 0.49843329191207886 0.4992305636405945\nEpoch = 11 Step[ 18/800] Dis-Loss = 1.39666 Gen-Loss = 0.69413 acc = 0.49439537525177 0.49936795234680176 0.49956727027893066\nEpoch = 11 Step[ 19/800] Dis-Loss = 1.39547 Gen-Loss = 0.68894 acc = 0.4965556263923645 0.5009547472000122 0.50217604637146\nEpoch = 11 Step[ 20/800] Dis-Loss = 1.40014 Gen-Loss = 0.68731 acc = 0.49795830249786377 0.5047087669372559 0.5029927492141724\nEpoch = 11 Step[ 21/800] Dis-Loss = 1.39939 Gen-Loss = 0.68157 acc = 0.4992484450340271 0.5055797100067139 0.5059032440185547\nEpoch = 11 Step[ 22/800] Dis-Loss = 1.39685 Gen-Loss = 0.68350 acc = 0.5011044144630432 0.5062112808227539 0.5048987865447998\nEpoch = 11 Step[ 23/800] Dis-Loss = 1.40106 Gen-Loss = 0.67878 acc = 0.5000138878822327 0.5072113871574402 0.5073151588439941\nEpoch = 11 Step[ 24/800] Dis-Loss = 1.39461 Gen-Loss = 0.67364 acc = 0.5038366317749023 0.5077934265136719 0.5099223256111145\nEpoch = 11 Step[ 25/800] Dis-Loss = 1.39969 Gen-Loss = 0.67624 acc = 0.5037130117416382 0.5101500153541565 0.5085829496383667\nEpoch = 11 Step[ 26/800] Dis-Loss = 1.39465 Gen-Loss = 0.67797 acc = 0.5068079829216003 0.5106897354125977 0.5076940059661865\nEpoch = 11 Step[ 27/800] Dis-Loss = 1.39140 Gen-Loss = 0.67540 acc = 0.5069887042045593 0.5093010663986206 0.509019136428833\nEpoch = 11 Step[ 28/800] Dis-Loss = 1.39471 Gen-Loss = 0.67204 acc = 0.5065065026283264 0.5104171633720398 
0.5107471942901611\nEpoch = 11 Step[ 29/800] Dis-Loss = 1.38948 Gen-Loss = 0.67597 acc = 0.5073249340057373 0.50869220495224 0.5087141990661621\nEpoch = 11 Step[ 30/800] Dis-Loss = 1.39110 Gen-Loss = 0.67697 acc = 0.5070934295654297 0.5092823505401611 0.5082111358642578\nEpoch = 11 Step[ 31/800] Dis-Loss = 1.38411 Gen-Loss = 0.67654 acc = 0.5093710422515869 0.5080138444900513 0.5084192156791687\nEpoch = 11 Step[ 32/800] Dis-Loss = 1.38841 Gen-Loss = 0.67672 acc = 0.5088425278663635 0.5096365809440613 0.5083272457122803\nEpoch = 11 Step[ 33/800] Dis-Loss = 1.38650 Gen-Loss = 0.68095 acc = 0.5076881647109985 0.5076009035110474 0.5061988830566406\nEpoch = 11 Step[ 34/800] Dis-Loss = 1.38216 Gen-Loss = 0.68059 acc = 0.5090674161911011 0.5067769289016724 0.506359338760376\nEpoch = 11 Step[ 35/800] Dis-Loss = 1.38291 Gen-Loss = 0.68288 acc = 0.5087659358978271 0.5068477392196655 0.5051974058151245\nEpoch = 11 Step[ 36/800] Dis-Loss = 1.37935 Gen-Loss = 0.68255 acc = 0.5101670026779175 0.5064420104026794 0.5053704977035522\nEpoch = 11 Step[ 37/800] Dis-Loss = 1.38334 Gen-Loss = 0.68530 acc = 0.5082207918167114 0.5065510272979736 0.5039876699447632\nEpoch = 11 Step[ 38/800] Dis-Loss = 1.37668 Gen-Loss = 0.68741 acc = 0.5093458890914917 0.5043520927429199 0.502923846244812\nEpoch = 11 Step[ 39/800] Dis-Loss = 1.37456 Gen-Loss = 0.68532 acc = 0.5088659524917603 0.5028347373008728 0.5039559602737427\nEpoch = 11 Step[ 40/800] Dis-Loss = 1.37922 Gen-Loss = 0.68712 acc = 0.5078629851341248 0.5041408538818359 0.5030476450920105\nEpoch = 11 Step[ 41/800] Dis-Loss = 1.38091 Gen-Loss = 0.68730 acc = 0.5076212882995605 0.5047438740730286 0.502979040145874\nEpoch = 11 Step[ 42/800] Dis-Loss = 1.38140 Gen-Loss = 0.68322 acc = 0.5097119808197021 0.5069875717163086 0.5050288438796997\nEpoch = 11 Step[ 43/800] Dis-Loss = 1.38365 Gen-Loss = 0.68254 acc = 0.5085858702659607 0.5070288181304932 0.5053671002388\nEpoch = 11 Step[ 44/800] Dis-Loss = 1.38717 Gen-Loss = 0.67986 acc = 0.5069579482078552 0.5071951746940613 0.5067316889762878\nEpoch = 11 Step[ 45/800] Dis-Loss = 1.37742 Gen-Loss = 0.68101 acc = 0.5098463296890259 0.5051684379577637 0.5061638951301575\nEpoch = 11 Step[ 46/800] Dis-Loss = 1.38623 Gen-Loss = 0.68450 acc = 0.5086610317230225 0.5083296298980713 0.5043853521347046\nEpoch = 11 Step[ 47/800] Dis-Loss = 1.38086 Gen-Loss = 0.68222 acc = 0.5096978545188904 0.50670325756073 0.505557119846344\nEpoch = 11 Step[ 48/800] Dis-Loss = 1.38609 Gen-Loss = 0.67954 acc = 0.5061302185058594 0.5058160424232483 0.5069276094436646\nEpoch = 11 Step[ 49/800] Dis-Loss = 1.38656 Gen-Loss = 0.69063 acc = 0.5063884258270264 0.5062928199768066 0.5013158917427063\nEpoch = 11 Step[ 50/800] Dis-Loss = 1.38415 Gen-Loss = 0.68900 acc = 0.504163384437561 0.5028757452964783 0.5021712779998779\nEpoch = 11 Step[ 51/800] Dis-Loss = 1.38663 Gen-Loss = 0.68714 acc = 0.5043010711669922 0.504252016544342 0.5031150579452515\nEpoch = 11 Step[ 52/800] Dis-Loss = 1.38415 Gen-Loss = 0.68537 acc = 0.5039762258529663 0.5026707053184509 0.5040122270584106\nEpoch = 11 Step[ 53/800] Dis-Loss = 1.38548 Gen-Loss = 0.69654 acc = 0.5032011270523071 0.5025256872177124 0.4984489977359772\nEpoch = 11 Step[ 54/800] Dis-Loss = 1.38128 Gen-Loss = 0.69541 acc = 0.502758264541626 0.5000064373016357 0.49900683760643005\nEpoch = 11 Step[ 55/800] Dis-Loss = 1.38459 Gen-Loss = 0.69836 acc = 0.5013798475265503 0.5003198981285095 0.4975772500038147\nEpoch = 11 Step[ 56/800] Dis-Loss = 1.37750 Gen-Loss = 0.69566 acc = 0.50372314453125 0.49901631474494934 
0.49887794256210327\nEpoch = 11 Step[ 57/800] Dis-Loss = 1.38543 Gen-Loss = 0.69380 acc = 0.5009276866912842 0.5001643300056458 0.4998044967651367\nEpoch = 11 Step[ 58/800] Dis-Loss = 1.38301 Gen-Loss = 0.69909 acc = 0.5004873275756836 0.49859797954559326 0.4971603751182556\nEpoch = 11 Step[ 59/800] Dis-Loss = 1.39570 Gen-Loss = 0.69949 acc = 0.49632200598716736 0.500684380531311 0.4969950318336487\nEpoch = 11 Step[ 60/800] Dis-Loss = 1.37884 Gen-Loss = 0.70543 acc = 0.5015406608581543 0.49751681089401245 0.4940814971923828\nEpoch = 11 Step[ 61/800] Dis-Loss = 1.39020 Gen-Loss = 0.70349 acc = 0.4962139129638672 0.497819185256958 0.49501579999923706\nEpoch = 11 Step[ 62/800] Dis-Loss = 1.38603 Gen-Loss = 0.70537 acc = 0.4972054362297058 0.4967914819717407 0.49411043524742126\nEpoch = 11 Step[ 63/800] Dis-Loss = 1.39143 Gen-Loss = 0.70318 acc = 0.49520349502563477 0.4973457455635071 0.49512335658073425\nEpoch = 11 Step[ 64/800] Dis-Loss = 1.38293 Gen-Loss = 0.70611 acc = 0.49810147285461426 0.49607402086257935 0.4937043786048889\nEpoch = 11 Step[ 65/800] Dis-Loss = 1.38572 Gen-Loss = 0.70159 acc = 0.4953227639198303 0.49473828077316284 0.49592092633247375\nEpoch = 11 Step[ 66/800] Dis-Loss = 1.38925 Gen-Loss = 0.70155 acc = 0.49480295181274414 0.49589723348617554 0.4959692358970642\nEpoch = 11 Step[ 67/800] Dis-Loss = 1.38732 Gen-Loss = 0.70520 acc = 0.495777428150177 0.49604350328445435 0.49409234523773193\nEpoch = 11 Step[ 68/800] Dis-Loss = 1.39470 Gen-Loss = 0.70520 acc = 0.49266529083251953 0.4965123236179352 0.4941348433494568\n"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c4e04936ee06463e0d121f009ae1efda92a4a8 | 2,895 | ipynb | Jupyter Notebook | code/my_program/low_pass_test.ipynb | xing710/ModSimPy | 87f0f481926c40855223e2843bd728edb235c516 | [
"MIT"
] | null | null | null | code/my_program/low_pass_test.ipynb | xing710/ModSimPy | 87f0f481926c40855223e2843bd728edb235c516 | [
"MIT"
] | null | null | null | code/my_program/low_pass_test.ipynb | xing710/ModSimPy | 87f0f481926c40855223e2843bd728edb235c516 | [
"MIT"
] | null | null | null | 30.797872 | 94 | 0.545769 | [
[
[
"import numpy as np\nfrom scipy.signal import butter, lfilter, freqz\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\ndef butter_lowpass(cutoff, fs, order=5):\n nyq = 0.5 * fs\n normal_cutoff = cutoff / nyq\n b, a = butter(order, normal_cutoff, btype='low', analog=False)\n return b, a\n\ndef butter_lowpass_filter(data, cutoff, fs, order=5):\n b, a = butter_lowpass(cutoff, fs, order=order)\n y = lfilter(b, a, data)\n return y\n\ndata = pd.read_csv('data/ecg_data1.csv')\nprint(data.head())\necg_data=data.CH1\n# Filter requirements.\norder = 15\nfs = 200.0 # sample rate, Hz\ncutoff = 25 # desired cutoff frequency of the filter, Hz\n\n# Get the filter coefficients so we can check its frequency response.\nb, a = butter_lowpass(cutoff, fs, order)\n\n\n# Plot the frequency response.\nw, h = freqz(b, a, worN=8000)\nplt.subplot(2, 1, 1)\nplt.plot(0.5*fs*w/np.pi, np.abs(h), 'b')\nplt.plot(cutoff, 0.5*np.sqrt(2), 'ko')\nplt.axvline(cutoff, color='k')\nplt.xlim(0, 0.5*fs)\nplt.title(\"Lowpass Filter Frequency Response\")\nplt.xlabel('Frequency [Hz]')\nplt.grid()\n# Demonstrate the use of the filter.\n# First make some data to be filtered.\nT = 5.0 # seconds\nn = int(T * fs) # total number of samples\nt = np.linspace(0, T, n, endpoint=False)\n# \"Noisy\" data. We want to recover the 1.2 Hz signal from this.\n# data = np.sin(1.2*2*np.pi*t) + 1.5*np.cos(9*2*np.pi*t) + 0.5*np.sin(12.0*2*np.pi*t)\n\n# Filter the data, and plot both the original and filtered signals.\ny = butter_lowpass_filter(ecg_data, cutoff, fs, order)\ny = butter_lowpass_filter(y, cutoff, fs, order)\n\nplt.subplot(2, 1, 2)\nplt.plot(ecg_data[1000:], 'b-', label='data')\nplt.plot(y[1000:], 'g-', linewidth=2, label='filtered data')\nplt.xlabel('Time [sec]')\nplt.grid()\nplt.legend()\n\nplt.subplots_adjust(hspace=0.35)\nplt.show()\n ",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code"
]
] |
d0c4f5090d3188ea16c205586196e0fba84d4ab1 | 716,456 | ipynb | Jupyter Notebook | GrowingPlant/GP.ipynb | crestdsl/case-studies | 4365e65f3339bf205379e2ab6df2d838ee2000e6 | [
"MIT"
] | null | null | null | GrowingPlant/GP.ipynb | crestdsl/case-studies | 4365e65f3339bf205379e2ab6df2d838ee2000e6 | [
"MIT"
] | null | null | null | GrowingPlant/GP.ipynb | crestdsl/case-studies | 4365e65f3339bf205379e2ab6df2d838ee2000e6 | [
"MIT"
] | null | null | null | 71.1052 | 34,598 | 0.518058 | [
[
[
"# this is a little trick to make sure the the notebook takes up most of the screen:\nfrom IPython.display import HTML\ndisplay(HTML(\"<style>.container { width:90% !important; }</style>\"))\n\n# Recommendation to leave the logging config like this, otherwise you'll be flooded with unnecessary info\nimport logging\nlogging.basicConfig(level=logging.WARNING, format='%(levelname)s:%(message)s') \n\n\n# Recommendation: logging config like this, otherwise you'll be flooded with unnecessary information\nimport logging\nlogging.basicConfig(level=logging.ERROR)\n\nimport sys\nsys.path.append('../')",
"_____no_output_____"
],
[
"# import all modeling concepts\nfrom crestdsl.model import *\n\n# import the simulator\nfrom crestdsl.simulation import Simulator\n\n# import the plotting libraries that can visualise the CREST systems\nfrom crestdsl.ui import elk\n\n# we will create tests for each Entity\nimport unittest\n\nclass TestClass(unittest.TestCase):\n @classmethod\n def runall(cls):\n tests = unittest.TestLoader().loadTestsFromTestCase(cls)\n return unittest.TextTestRunner().run(tests)\n \n",
"_____no_output_____"
],
[
"class Resources(object):\n electricity = Resource(\"Watt\", REAL)\n switch = Resource(\"switch\", [\"on\", \"off\"])\n pourcent = Resource(\"%\", REAL) \n light = Resource(\"Lumen\", INTEGER)\n time = Resource(\"minutes\", REAL)\n water = Resource(\"litre\", REAL)\n celsius = Resource(\"Celsius\", REAL) \n counter = Resource(\"Count\", INTEGER)\n fahrenheit = Resource(\"Fahrenheit\", REAL)\n boolean = Resource(\"bool\", BOOL)\n presence = Resource(\"presence\", [\"detected\", \"no presence\"])\n onOffAuto = Resource(\"onOffAutoSwitch\", [\"on\", \"off\", \"auto\"])\n integer = Resource(\"integer\", INTEGER)\n weight = Resource(\"kg\", REAL)\n lenght = Resource(\"m\", REAL)\n area = Resource(\"m²\", REAL)",
"_____no_output_____"
],
[
"class ElectricalDevice(object):\n electricity_in = Input(Resources.electricity, value=0)\n req_electricity_out = Output(Resources.electricity, value=0)\n \nclass WaterDevice(object):\n water_in = Input(Resources.water, value=0)\n req_water_out = Output(Resources.water, value=0)",
"_____no_output_____"
],
[
"class LightElement(Entity):\n \"\"\"This is a definition of a new Entity type. It derives from CREST's Entity base class.\"\"\"\n \n \"\"\"we define ports - each has a resource and an initial value\"\"\"\n electricity = Input(resource=Resources.electricity, value=0)\n light = Output(resource=Resources.light, value=0)\n \n \"\"\"automaton states - don't forget to specify one as the current state\"\"\"\n on = State()\n off = current = State()\n \n \"\"\"transitions and guards (as lambdas)\"\"\"\n off_to_on = Transition(source=off, target=on, guard=(lambda self: self.electricity.value >= 100))\n on_to_off = Transition(source=on, target=off, guard=(lambda self: self.electricity.value < 100))\n \n \"\"\"\n update functions. They are related to a state, define the port to be updated and return the port's new value\n Remember that updates need two parameters: self and dt.\n \"\"\"\n @update(state=on, target=light)\n def set_light_on(self, dt=0):\n return 800\n\n @update(state=off, target=light)\n def set_light_off(self, dt=0):\n return 0",
"_____no_output_____"
],
[
"class HeatElement(Entity):\n \"\"\" Ports \"\"\"\n electricity = Input(resource=Resources.electricity, value=0)\n switch = Input(resource=Resources.switch, value=\"off\") # the heatelement has its own switch\n heat = Output(resource=Resources.celsius, value=0) # and produces a celsius value (i.e. the temperature increase underneath the lamp)\n \n \"\"\" Automaton (States) \"\"\"\n state = current = State() # the only state of this entity\n \n \"\"\"Update\"\"\"\n @update(state=state, target=heat)\n def heat_output(self, dt):\n # When the lamp is on, then we convert electricity to temperature at a rate of 100Watt = 1Celsius\n if self.switch.value == \"on\":\n return self.electricity.value / 100\n else:\n return 0\n\n# show us what it looks like\nelk.plot(HeatElement())",
"_____no_output_____"
],
[
"# a logical entity (this one sums two values)\nclass Adder(LogicalEntity):\n heat_in = Input(resource=Resources.celsius, value=0)\n room_temp_in = Input(resource=Resources.celsius, value=22)\n temperature = Output(resource=Resources.celsius, value=22)\n \n state = current = State()\n @update(state=state, target=temperature)\n def add(self, dt):\n return self.heat_in.value + self.room_temp_in.value\n \nelk.plot(Adder()) # try adding the display option 'show_update_ports=True' and see what happens!",
"_____no_output_____"
],
[
"# Entity composed by the 3 subentities above, heats and lights the plant to an optimal value\n\nclass GrowLamp(Entity):\n \n \"\"\" - - - - - - - PORTS - - - - - - - - - - \"\"\"\n electricity = Input(resource=Resources.electricity, value=0)\n switch = Input(resource=Resources.switch, value=\"off\")\n heat_switch = Input(resource=Resources.switch, value=\"on\")\n room_temperature = Input(resource=Resources.fahrenheit, value=71.6)\n \n light = Output(resource=Resources.light, value=3.1415*1000) # note that these are bogus values for now\n temperature = Output(resource=Resources.celsius, value=42) # yes, nonsense..., they are updated when simulated\n \n on_time = Local(resource=Resources.time, value=0)\n on_count = Local(resource=Resources.counter, value=0)\n \n \"\"\" - - - - - - - SUBENTITIES - - - - - - - - - - \"\"\"\n lightelement = LightElement()\n heatelement = HeatElement()\n adder = Adder()\n \n \n \"\"\" - - - - - - - INFLUENCES - - - - - - - - - - \"\"\"\n \"\"\"\n Influences specify a source port and a target port. \n They are always executed, independent of the automaton's state.\n Since they are called directly with the source-port's value, a self-parameter is not necessary.\n \"\"\"\n @influence(source=room_temperature, target=adder.room_temp_in)\n def fahrenheit_to_celsius(value):\n return (value - 32) * 5 / 9\n \n # we can also define updates and influences with lambda functions... \n heat_to_add = Influence(source=heatelement.heat, target=adder.heat_in, function=(lambda val: val))\n \n # if the lambda function doesn't do anything (like the one above) we can omit it entirely...\n add_to_temp = Influence(source=adder.temperature, target=temperature)\n light_to_light = Influence(source=lightelement.light, target=light)\n heat_switch_influence = Influence(source=heat_switch, target=heatelement.switch)\n \n \n \"\"\" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - \"\"\"\n on = State()\n off = current = State()\n error = State()\n \n off_to_on = Transition(source=off, target=on, guard=(lambda self: self.switch.value == \"on\" and self.electricity.value >= 100))\n on_to_off = Transition(source=on, target=off, guard=(lambda self: self.switch.value == \"off\" or self.electricity.value < 100))\n \n # transition to error state if the lamp ran for more than 1000.5 time units\n @transition(source=on, target=error)\n def to_error(self):\n \"\"\"More complex transitions can be defined as a function. 
We can use variables and calculations\"\"\"\n timeout = self.on_time.value >= 1000.5\n heat_is_on = self.heatelement.switch.value == \"on\"\n return timeout and heat_is_on\n \n \"\"\" - - - - - - - UPDATES - - - - - - - - - - \"\"\"\n # LAMP is OFF or ERROR\n @update(state=[off, error], target=lightelement.electricity)\n def update_light_elec_off(self, dt):\n # no electricity\n return 0\n\n @update(state=[off, error], target=heatelement.electricity)\n def update_heat_elec_off(self, dt):\n # no electricity\n return 0\n \n \n \n # LAMP is ON\n @update(state=on, target=lightelement.electricity)\n def update_light_elec_on(self, dt):\n # the lightelement gets the first 100Watt\n return 100\n \n @update(state=on, target=heatelement.electricity)\n def update_heat_elec_on(self, dt):\n # the heatelement gets the rest\n return self.electricity.value - 100\n \n @update(state=on, target=on_time)\n def update_time(self, dt):\n # also update the on_time so we know whether we overheat\n return self.on_time.value + dt\n \n \"\"\" - - - - - - - ACTIONS - - - - - - - - - - \"\"\"\n # let's add an action that counts the number of times we switch to state \"on\"\n @action(transition=off_to_on, target=on_count)\n def count_switching_on(self):\n \"\"\"\n Actions are functions that are executed when the related transition is fired.\n Note that actions do not have a dt.\n \"\"\"\n return self.on_count.value + 1\n\n# create an instance!\nelk.plot(GrowLamp())",
"_____no_output_____"
],
[
"# Really bad model of the light send by the sun taken from the \"SmartHome\" file\n\n\nclass Sun(Entity):\n \n \"\"\" - - - - - - - PORTS - - - - - - - - - - \"\"\"\n \n time_In = Input(Resources.time, 0)\n time_Local = Local(Resources.time, 0)\n light_Out = Output(Resources.light, 0)\n \n \"\"\" - - - - - - - SUBENTITIES - - - - - - - - - - \"\"\"\n #None\n \n \"\"\" - - - - - - - INFLUENCES - - - - - - - - - - \"\"\"\n #None\n \n \"\"\" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - \"\"\" \n\n state = current = State()\n \n time_propagation = Influence(source = time_In, target = time_Local)\n \n \"\"\" - - - - - - - UPDATES - - - - - - - - - - \"\"\"\n\n #Je pars ici du principe que dt = 1min\n #On a donc 60*24 = 1440min/jour\n \n \"CREER INFLUENCE ENTRE LES VARIABLES DE TEMPS\"\n \n @update(state = state, target = light_Out)\n def update_light_out(self, dt):\n \n if(self.time_Local.value%1440 > 21*60):\n light = 0\n elif(self.time_Local.value%1440 > 16*60):\n light = 20000*(21*60-self.time_Local.value)//60 #On veut 20000*un nombre flottant entre 0 et 5\n elif(self.time_Local.value%1440 > 12*60):\n light = 100000\n elif(self.time_Local.value%1440 > 7*60):\n light = 20000*(abs(self.time_Local.value - 7*60))//60 #On veut 20000*un nombre flottant entre 0 et 5\n else:\n light = 0\n \n return light\n \n#elk.plot(Sun())\n\nsun=Sun()\nsun.time_In.value = 691\nssim=Simulator(sun)\nssim.stabilize()\nssim.plot()",
"_____no_output_____"
],
[
"# Used to take water from the grid and sends it to the Plant Reservoir\n\nclass Pump(Entity):\n \n \"\"\" - - - - - - - PORTS - - - - - - - - - - \"\"\"\n \n water_in = Input(Resources.water, 100)\n \n size_pipe = Local(Resources.water, 2)\n \n water_send = Output(Resources.water, 2)\n\n \n \"\"\" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - \"\"\" \n\n state = current = State()\n \n \"\"\" - - - - - - - - - - UPDATES - - - - - - - - - - - - \"\"\"\n \n @update(state = state, target = water_send)\n def update_water_send(self, dt):\n \n return min(self.water_in.value, self.size_pipe.value)\n \nelk.plot(Pump())",
"_____no_output_____"
],
[
"# Contains the water for the plants himidity, fill itself with the pump\n\n@dependency(source=\"water_in\", target=\"water_send\")\nclass PlantReservoir(Entity):\n \n \"\"\" - - - - - - - PORTS - - - - - - - - - - \"\"\"\n \n water_in = Input(Resources.water, 2)\n water_needed = Input(Resources.water, 2)\n water_send = Output(Resources.water, 2)\n max_cap = Local(Resources.water, 10)\n actual_cap = Local(Resources.water, 1)\n \n \"\"\" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - \"\"\" \n\n off = current = State()\n draining = State()\n filling = State()\n \n off_to_draining = Transition(source=off, target=draining, guard=(lambda self: self.water_needed.value > 0 and self.actual_cap.value >= 2))\n off_to_filling = Transition(source=off, target=filling, guard=(lambda self: self.water_needed.value > 0 and self.actual_cap.value < 2))\n\n filling_to_draining = Transition(source=filling, target=draining, guard=(lambda self: self.actual_cap.value >= self.max_cap.value and self.water_needed.value > 0))\n filling_to_off = Transition(source=filling, target=off, guard=(lambda self: (self.actual_cap.value >= self.max_cap.value and self.water_needed.value == 0) ))# or self.water_in.value == 0))\n \n draining_to_filling = Transition(source=draining, target=filling, guard=(lambda self: self.actual_cap.value <= 2 or (self.water_needed.value == 0 and self.actual_cap.value < self.max_cap.value)))\n draining_to_off = Transition(source = draining, target = off, guard=(lambda self: self.actual_cap.value >= self.max_cap.value and self.water_needed.value == 0))\n \n \"\"\" - - - - - - - - - - INFLUENCES - - - - - - - - - - - - \"\"\"\n \n \n \n @update(state = draining, target = actual_cap)\n def update_actual_cap_drain(self, dt):\n \n return max(self.actual_cap.value - self.water_needed.value*dt,0)\n \n @update(state = filling, target = actual_cap)\n def update_actual_cap_fill(self, dt):\n \n return min(self.actual_cap.value + self.water_in.value*dt, self.max_cap.value)\n \n influ_water_pump = Influence(target = water_send, source = water_needed)\n\n \nelk.plot(PlantReservoir())",
"_____no_output_____"
],
[
"# Entity representing the plants humidity, adapting the consumption for the reservoir\n\nclass Plants(Entity):\n \n \"\"\" - - - - - - - PORTS - - - - - - - - - - \"\"\"\n \n water_in = Input(Resources.water, 0)\n \n water_cons = Local(Resources.water, 2)\n \n actual_humidity = Output(Resources.pourcent, 1)\n \n water_needed = Output(Resources.water,2)\n \n \"\"\" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - \"\"\" \n\n watering = current = State()\n off = State()\n \n off_to_watering = Transition(source=off, target=watering, guard=(lambda self: self.actual_humidity.value < 50))# and self.water_in.value >= self.water_cons.value))\n\n watering_to_off = Transition(source=watering, target=off, guard=(lambda self: self.actual_humidity.value > 70))# or self.water_in.value < self.water_cons.value))\n\n \n \n \"\"\" - - - - - - - - - - INFLUENCES - - - - - - - - - - - - \"\"\"\n \n @update(state = watering, target = actual_humidity)\n def update_actual_humidity_watering(self, dt):\n \n return max(self.actual_humidity.value + 5*dt,0)\n \n @update(state = off, target = actual_humidity)\n def update_actual_humidity_off(self, dt):\n \n return max(self.actual_humidity.value - dt,0)\n \n @update(state = watering, target = water_needed)\n def update_water_needed_watering(self, dt):\n \n return self.water_cons.value\n \n @update(state = off, target = water_needed)\n def update_water_needed_off(self, dt):\n \n return 0\n \n \n\n \nelk.plot(Plants())",
"_____no_output_____"
],
[
"# Entity made with all the subentities above, take account of the humidity, the room temperature and the time (for the sun),\n#The output \"plant_is_ok is a boolean, true if all 3 are in optimal ranges\"\n\nclass GrowingPlant(Entity):\n\n \"\"\" - - - - - - - PORTS - - - - - - - - - - \"\"\"\n electricity_in = Input(Resources.electricity, 200)\n room_temp = Input(Resources.fahrenheit, 0)\n time = Local(Resources.time, 0)\n plant_is_ok = Output(Resources.boolean, False)\n \n def __init__(self, starting_time=0):\n self.time.value = starting_time\n\n \n \"\"\" - - - - - - - SUBENTITIES - - - - - - - - - - \"\"\"\n \n gl = GrowLamp()\n s = Sun()\n pump = Pump()\n pr = PlantReservoir()\n pl = Plants()\n \n \"\"\" - - - - - - - INFLUENCES - - - - - - - - - - \"\"\"\n \n time_propagation = Influence(source=time, target = s.time_In)\n water_propagation = Influence(source = pump.water_send, target = pr.water_in)\n water_propagation2 = Influence(source = pr.water_send, target = pl.water_in)\n elec_propagation = Influence(source = electricity_in, target = gl.electricity)\n #needed_water_propagation = Influence(source = pr.water_send, target = pump.water_needed)\n needed_water_propagation2 = Influence(source = pl.water_needed, target = pr.water_needed)\n\n \n \"\"\" - - - - - - - STATES & TRANSITIONS - - - - - - - - - - \"\"\"\n state = current = State()\n \n \"\"\" - - - - - - - UPDATES - - - - - - - - - - \"\"\"\n \n @update(state=state, target=gl.room_temperature)\n def set_gl_room_temp(self,dt):\n return self.room_temp.value + (self.s.light_Out.value/(1000000)-0.1)*dt\n \n @update(state=state, target=plant_is_ok)\n def set_plant_is_ok(self,dt):\n return ((self.pl.actual_humidity.value>=50)and(self.pl.actual_humidity.value<=70)\n and(self.gl.temperature.value>=20)and(self.gl.temperature.value<=25))\n \n \n#elk.plot(GrowingPlant())\n\ngPlant = GrowingPlant()\ngPlant.room_temp.value = 68\ngPlant.gl.switch.value = \"on\"\ngPlant.pump.water_in.value = 100\ngPlant.pl.water_in.value = 100\nsimuLamp = Simulator(gPlant)\nsimuLamp.stabilize()\n#simuLamp.advance(3)\nfor i in range(5):\n simuLamp.advance(1)\nsimuLamp.plot()\nsimuLamp.traces.plot(traces=[gPlant.gl.temperature])\nsimuLamp.traces.plot(traces=[gPlant.pl.actual_humidity])\nsimuLamp.traces.plot(traces=[gPlant.plant_is_ok])",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c51731da49642188fd085b599215e34af4c9b3 | 342,588 | ipynb | Jupyter Notebook | Parkinsons/Dataset Analysis.ipynb | jayanta-banik/DataScience | 631fda79ef2e16700dfd4d3706f3fcd0cdf8645f | [
"MIT"
] | null | null | null | Parkinsons/Dataset Analysis.ipynb | jayanta-banik/DataScience | 631fda79ef2e16700dfd4d3706f3fcd0cdf8645f | [
"MIT"
] | null | null | null | Parkinsons/Dataset Analysis.ipynb | jayanta-banik/DataScience | 631fda79ef2e16700dfd4d3706f3fcd0cdf8645f | [
"MIT"
] | null | null | null | 920.935484 | 158,040 | 0.956548 | [
[
[
"# <center>Dataset Anaylsis</center>",
"_____no_output_____"
]
],
[
[
"%%html\n<style>\nbody {\n font-family: \"Apple Script\", cursive, sans-serif;\n}\n</style> ",
"_____no_output_____"
]
],
[
[
"_importing necessary libraries of Data Science_",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport cv2\nfrom matplotlib import pyplot as plt\nimport os",
"_____no_output_____"
]
],
[
[
"_making a function for showing image through open cv_",
"_____no_output_____"
]
],
[
[
"def imshow(img):\n cv2.imshow('image',img)\n cv2.waitKey(0)\n cv2.destroyAllWindows()",
"_____no_output_____"
]
],
[
[
"_________",
"_____no_output_____"
],
[
"_read one data out of the dataset and viewing the image_",
"_____no_output_____"
]
],
[
[
"# reading data using opencv\nimg = cv2.imread(r\"PatientSpiral\\sp1-P1.jpg\")\ngrey_img = cv2.imread(r\"PatientSpiral\\sp1-P1.jpg\", cv2.IMREAD_GRAYSCALE)",
"_____no_output_____"
],
[
"# viewing data using matplotlib\nplt.imshow(img)\n# plt.imshow(grey_img)",
"_____no_output_____"
]
],
[
[
"_As we can see from above image, there is two diffrent pattern. one is question and one is answer, and they both overlap each other, we need to decompose them into __question__ and __answer__ to get accurate comparision across the dataset._",
"_____no_output_____"
]
],
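[
[
"# Added illustration -- not part of the original notebook.\n# A minimal sketch of why grey-level thresholds can separate the two overlapping patterns:\n# the histogram of grey_img (loaded above) typically shows a dark cluster (printed question),\n# a mid-grey cluster (handwritten answer) and a bright cluster (paper background).\nplt.hist(grey_img.ravel(), bins=256, range=(0, 255))\nplt.xlabel('grey level')\nplt.ylabel('pixel count')\nplt.show()",
"_____no_output_____"
]
],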
[
[
"Q_img = np.zeros(np.shape(grey_img))\nA_img = np.zeros(np.shape(grey_img))",
"_____no_output_____"
]
],
[
[
"#### extracting question",
"_____no_output_____"
],
[
"Setting bias as 30 for black colour",
"_____no_output_____"
]
],
[
[
"for i,e in enumerate(grey_img):\n for j,f in enumerate(e):\n if f > 30:\n Q_img[i][j] = 255 \n else:\n Q_img[i][j] = 0",
"_____no_output_____"
]
],
[
[
"If we consider that the darkness of shade is also a factor in diagnosing the \nParkinson's disease, as it may reflect the muscle strength of the Person then\ndon't use the else statement in above code<br>\n__Points to consider:__\n- This may increase the ambiguity as the lightning conditions do not remain constant during image capture\n- Muscle strength also depends on the person irrespective of disease (not the age,weight factor)\n- Variance of darkness due to age, height, weight factor will be handled later by the model\n- Camera quality may also increase ambiguity\n- Pen and paper quality also contribute to ambiguity, but has nothing to do with the disease itself.\n- not considering the darkness of ink may leave out variables that represent the deterioration of muscle control that is a significant part of Diagnosis",
"_____no_output_____"
]
],
[
[
"imshow(Q_img)",
"_____no_output_____"
]
],
[
[
"#### extracting answer",
"_____no_output_____"
],
[
"<font TimesNewRoman>Setting +ve bias as __60__ for _removing black colour_ and -ve bias as __160__ for _removing white noise_ </font>",
"_____no_output_____"
]
],
[
[
"for i,e in enumerate(grey_img):\n for j,f in enumerate(e):\n if f < 60 or f > 160:\n A_img[i][j] = 255\n else:\n A_img[i][j] = 0",
"_____no_output_____"
]
],
[
[
"Again don't use the else statement in the above code if considering darkness of ink as a variable",
"_____no_output_____"
]
],
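[
[
"# Added sketch -- not part of the original notebook.\n# If the darkness of the ink is to be kept as a variable (as discussed above), one option is\n# to copy the original grey values instead of binarising. The thresholds 30, 60 and 160 are\n# the ones used in the cells above; np.where is an assumed vectorised stand-in for the loops.\nQ_gray = np.where(grey_img <= 30, grey_img, 255)                        # question keeps its shade\nA_gray = np.where((grey_img >= 60) & (grey_img <= 160), grey_img, 255)  # answer keeps its shade\nplt.imshow(A_gray, cmap='gray')",
"_____no_output_____"
]
],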
[
[
"imshow(A_img)",
"_____no_output_____"
],
[
"plt.imshow(img)",
"_____no_output_____"
],
[
"plt.imshow(Q_img)",
"_____no_output_____"
],
[
"plt.imshow(A_img)",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
d0c5336b8ee3435b68d25cc698e9d3b68a296ca4 | 152,178 | ipynb | Jupyter Notebook | analysis/graph/.ipynb_checkpoints/simple_cycles-checkpoint.ipynb | harangju/avalanche | 9193eb4e4accd1398519fe43c29cfd91ed989dbd | [
"MIT"
] | null | null | null | analysis/graph/.ipynb_checkpoints/simple_cycles-checkpoint.ipynb | harangju/avalanche | 9193eb4e4accd1398519fe43c29cfd91ed989dbd | [
"MIT"
] | null | null | null | analysis/graph/.ipynb_checkpoints/simple_cycles-checkpoint.ipynb | harangju/avalanche | 9193eb4e4accd1398519fe43c29cfd91ed989dbd | [
"MIT"
] | null | null | null | 422.716667 | 90,828 | 0.932592 | [
[
[
"import networkx as nx\nimport matplotlib.pyplot as plt\nimport numpy as np",
"_____no_output_____"
],
[
"# edges = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 0), (2, 1), (2, 2)]\nedges = [(0,0), (0,1), (1,0), (0, 3), (3, 0), (1, 3), (3, 1)]\nG = nx.DiGraph(edges)\nlist(nx.simple_cycles(G))\ntype(edges)",
"_____no_output_____"
],
[
"plt.subplot()\nnx.draw(G, with_labels=True)",
"_____no_output_____"
],
[
"import scipy.io as sio",
"_____no_output_____"
],
[
"mat = sio.loadmat('/Users/harangju/Developer/i=5_2.mat', squeeze_me = True)\nedges = mat['e']\nweights = mat['w']\nG = nx.DiGraph(edges.tolist())\nplt.subplot()\nnx.draw(G, with_labels=True)\ncycle_gen = nx.simple_cycles(G)\ncycles = list(cycle_gen)\nlen(cycles)\n# cycles",
"_____no_output_____"
],
[
"cycle_count = np.zeros((10, 10))\nfor i in range(11):\n for j in range(10):\n mat = sio.loadmat('/Users/harangju/Developer/i=' + str(i+1) + '_' + str(j+1) + '.mat', squeeze_me = True)\n edges = mat['e']\n weights = mat['w']\n G = nx.DiGraph(edges.tolist())\n cycle_gen = nx.simple_cycles(G)\n cycles = list(cycle_gen)\n cycle_count[i,j] = len(cycles)\n print(cycle_count[i,j])",
"0.0\n0.0\n0.0\n0.0\n0.0\n0.0\n0.0\n0.0\n0.0\n0.0\n2.0\n47.0\n4.0\n3.0\n0.0\n17.0\n51.0\n5.0\n2.0\n0.0\n2.0\n19.0\n2.0\n16.0\n3.0\n6.0\n9.0\n94.0\n63.0\n36.0\n13.0\n29.0\n85.0\n57.0\n9.0\n6.0\n12.0\n12.0\n5.0\n59.0\n1.0\n6.0\n9.0\n85.0\n41.0\n31.0\n188.0\n18.0\n79.0\n72.0\n20.0\n73.0\n7.0\n20.0\n72.0\n89.0\n47.0\n35.0\n133.0\n77.0\n171.0\n172.0\n84.0\n57.0\n117.0\n578.0\n19.0\n90.0\n132.0\n39.0\n82.0\n21.0\n102.0\n34.0\n87.0\n134.0\n309.0\n136.0\n113.0\n141.0\n70.0\n236.0\n76.0\n132.0\n168.0\n162.0\n29.0\n152.0\n448.0\n394.0\n559.0\n126.0\n96.0\n406.0\n247.0\n61.0\n428.0\n206.0\n73.0\n165.0\n"
],
[
"np.mean(cycle_count,1)\n# np.std(cycle_count,1)",
"_____no_output_____"
],
[
"mat = sio.loadmat('/Users/harangju/Developer/matlab.mat', squeeze_me = True)\nedges = mat['e']\nweights = mat['w']\nG = nx.DiGraph(edges.tolist())\nplt.subplot()\nnx.draw(G, with_labels=True)\ncycle_gen = nx.simple_cycles(G)\ncycles = list(cycle_gen)\nlen(cycles)",
"_____no_output_____"
],
[
"# find weights of cycles\ncycle_weights = np.ones((len(cycles),1))\nfor i in range(0,len(cycles)):\n for j in range(0,len(cycles[i])-1):\n node1 = cycles[i][j]\n node2 = cycles[i][j+1]\n idx_edge = edges.tolist().index([node1, node2])\n cycle_weights[i] = cycle_weights[i] * weights[idx_edge]",
"_____no_output_____"
],
[
"# find expected duration of cycle\ncycle_dur_exp = np.ones((len(cycles),1))\nfor i in range(0,len(cycles)):\n cycle_dur_exp[i] = len(cycles[i]) * cycle_weights[i]\n# cycle_dur_exp",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c55249bfb5ba0f3538799624ecde3b58409f46 | 23,500 | ipynb | Jupyter Notebook | Machine Learning/14. Naive Bayes 1.ipynb | Shaon2221/Learning-and-Experimenting_Data-Science | 817a402158c1cf5d77ce2ea92b3e91470851deec | [
"MIT"
] | 15 | 2020-08-20T17:45:12.000Z | 2022-03-08T20:06:49.000Z | Machine Learning/14. Naive Bayes 1.ipynb | Shaon2221/Learning-and-Experiment_Data-Science | 7effb0b624ab478f33e1c688ca90319001555adb | [
"MIT"
] | null | null | null | Machine Learning/14. Naive Bayes 1.ipynb | Shaon2221/Learning-and-Experiment_Data-Science | 7effb0b624ab478f33e1c688ca90319001555adb | [
"MIT"
] | 5 | 2020-08-20T18:41:49.000Z | 2020-09-03T09:10:24.000Z | 27.230591 | 277 | 0.407872 | [
[
[
"# Naive Bayes",
"_____no_output_____"
],
[
"<img src=nb1.png height=400 width=600>\n<img src=nb2.png height=400 width=600>\n<img src=nb3.png height=400 width=600>\n<img src=nb4.png height=400 width=600>\n<img src=nb5.png height=400 width=600>\n<img src=nb6.png height=400 width=600>\n<img src=nb7.png height=400 width=600>\n<img src=nb10.png height=400 width=600>",
"_____no_output_____"
],
[
"# Implementation with Titanic Dataset",
"_____no_output_____"
],
[
"<img src=nb8.png height=400 width=600>\n<img src=nb9.png height=400 width=600>",
"_____no_output_____"
]
],
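[
[
"_Added reference note, not part of the original screenshots: the relation the images above illustrate is Bayes' theorem combined with the naive conditional-independence assumption over the features, $P(y \\mid x_1, \\dots, x_n) \\propto P(y) \\prod_i P(x_i \\mid y)$; the class with the largest value is the prediction._",
"_____no_output_____"
]
],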
[
[
"import pandas as pd",
"_____no_output_____"
],
[
"df = pd.read_csv(\"titanic.csv\")\ndf.head()",
"_____no_output_____"
],
[
"df.drop(['PassengerId','Name','SibSp','Parch','Ticket','Cabin','Embarked'],axis='columns',inplace=True)\ndf.head() # Dropping Unnecessary columns",
"_____no_output_____"
],
[
"inputs = df.drop('Survived',axis='columns')\ntarget = df[['Survived']]",
"_____no_output_____"
],
[
"dummies = pd.get_dummies(inputs.Sex) # Using dummy columns for sex column\ndummies.head(3)",
"_____no_output_____"
],
[
"inputs = pd.concat([inputs,dummies],axis='columns') # Concating this two dataframe\ninputs.head(3)",
"_____no_output_____"
],
[
"inputs.drop(['Sex','male'],axis='columns',inplace=True) # Dropping sex column\ninputs.head(3)",
"_____no_output_____"
],
[
"inputs.Age = inputs.Age.fillna(inputs.Age.mean())\ninputs.Pclass = inputs.Pclass.fillna(inputs.Pclass.mean())\ninputs.Fare = inputs.Fare.fillna(inputs.Fare.mean())\ntarget.Survived = target.Survived.fillna(0)",
"c:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\pandas\\core\\generic.py:5303: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n self[name] = value\n"
],
[
"inputs.isna().sum()",
"_____no_output_____"
],
[
"target.isna().sum()",
"_____no_output_____"
],
[
"inputs.shape",
"_____no_output_____"
],
[
"target.shape",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split # Splitting\nX_train, X_test, y_train, y_test = train_test_split(inputs,target,test_size=0.2)",
"_____no_output_____"
],
[
"from sklearn.naive_bayes import GaussianNB\nmodel = GaussianNB() # Gaussian Naive Bayes Model",
"_____no_output_____"
],
[
"X_train.shape",
"_____no_output_____"
],
[
"y_train.isnull().sum()",
"_____no_output_____"
],
[
"model.fit(X_train,y_train) # Training",
"c:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\sklearn\\naive_bayes.py:206: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
],
[
"model.score(X_test,y_test) # Accuracy",
"_____no_output_____"
],
[
"model.predict(X_test[0:10]) # Prediction",
"_____no_output_____"
],
[
"model.predict_proba(X_test[:10])",
"_____no_output_____"
],
[
"from sklearn.model_selection import cross_val_score\ncross_val_score(GaussianNB(),X_train, y_train, cv=5)",
"c:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\sklearn\\naive_bayes.py:206: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\nc:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\sklearn\\naive_bayes.py:206: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\nc:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\sklearn\\naive_bayes.py:206: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\nc:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\sklearn\\naive_bayes.py:206: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\nc:\\users\\shaon\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\sklearn\\naive_bayes.py:206: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().\n y = column_or_1d(y, warn=True)\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c55679f5c110c54f0019d6662304a99acd9205 | 79,096 | ipynb | Jupyter Notebook | SITH_Layer-master/notebooks/DeepSITH_timeSeries_testin.ipynb | gauvand/SIF_Capstone | af70966fc5d5962fa38711d20dc0e6f83451f011 | [
"MIT"
] | 2 | 2020-11-23T16:00:00.000Z | 2020-12-01T15:25:35.000Z | SITH_Layer-master/notebooks/DeepSITH_timeSeries_testin.ipynb | gauvand/SIF_Capstone | af70966fc5d5962fa38711d20dc0e6f83451f011 | [
"MIT"
] | 3 | 2020-11-22T03:56:11.000Z | 2020-12-15T03:19:04.000Z | SITH_Layer-master/notebooks/DeepSITH_timeSeries_testin.ipynb | gauvand/SIF_Capstone | af70966fc5d5962fa38711d20dc0e6f83451f011 | [
"MIT"
] | null | null | null | 155.394892 | 49,000 | 0.864544 | [
[
[
"%matplotlib inline",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nimport torch\nfrom torch import nn as nn\nfrom math import factorial\nimport random\nimport torch.nn.functional as F\nimport numpy as np\nimport seaborn as sn\nimport pandas as pd\nimport os \nfrom os.path import join\nimport glob\nfrom math import factorial\nttype = torch.cuda.DoubleTensor if torch.cuda.is_available() else torch.DoubleTensor\nprint(ttype)\nfrom sith import DeepSITH\nfrom tqdm.notebook import tqdm\nimport pickle\nsn.set_context(\"poster\")",
"<class 'torch.DoubleTensor'>\n"
],
[
"sig_lets = [\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\",]\nsignals = ttype([[0,1,1,1,0,1,1,1,0,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,1,1,0,1,0,1,1,1,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,1,1,0,1,0,1,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,1,1,0,1,0,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,0,1,1,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,1,1,0,1,0,1,1,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,1,1,1,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0]]\n ).view(7, 1, 1, -1)\n\nkey2id = {k:i for i, k in enumerate(sig_lets)}\n\nprint(key2id)",
"{'A': 0, 'B': 1, 'C': 2, 'D': 3, 'E': 4, 'F': 5, 'G': 6, 'H': 7}\n"
],
[
"target = ttype([[0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0],\n [0,0,0,0,0,1,1,1,0,1,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0]]).view(7, -1)\nprint(target.shape)",
"torch.Size([7, 34])\n"
],
[
"signals.shape",
"_____no_output_____"
],
[
"def train_model(model, \n signals,\n target,\n optimizer,\n loss_func,\n train_dur=2.0,\n test_durs=[1.5, 2.0, 2.5],\n epochs=1500,\n loss_buffer_size=50,\n testing_every=30):\n loss_track = {\"loss\":[],\n \"epoch\":[],\n \"acc\":[],\n \"perf\":[]}\n losses = []\n \n progress_bar = tqdm(range(int(epochs)), ncols=800)\n for e in progress_bar:\n \n for i in range(target.shape[1]):\n \n # use target one by one\n perm = target[:,i].type(torch.LongTensor)\n #print(perm.shape)\n # Zero the gradient between each batch\n model.zero_grad()\n\n \n # Present an entire batch to the model\n # indexing using -1 at the time dimension, \n # only use the latest value\n out = model(signals)[:, -1,:]\n #print(out.shape)\n # Measure loss via CrossEntropyLoss\n loss = loss_func(out, \n perm)\n\n # Adjust Weights\n loss.backward()\n optimizer.step()\n\n losses.append(loss.detach().cpu().numpy())\n \n if len(losses) > loss_buffer_size:\n losses = losses[1:]\n\n # Record loss, epoch number, batch number in epoch, \n # last accuracy measure, etc\n loss_track['loss'].append(np.mean(losses))\n loss_track['epoch'].append(e)\n\n # calculate model accuracy:\n if ((e)%testing_every == 0) & (e != 0):\n model.eval()\n perf = test_model(model, signals, target)\n model.train()\n loss_track['perf'].append(perf)\n if e > testing_every:\n # Update progress_bar\n s = \"{}: Loss: {:.6f}, Acc:{:.4f}\"\n format_list = [e, loss_track['loss'][-1]] + [perf] \n s = s.format(*format_list)\n progress_bar.set_description(s)\n if loss_track['perf'][-1] == 1.0:\n break\n return loss_track\n\ndef test_model(model, signals,target):\n # Test the Model\n out = model(signals)[:, -1, :]\n print(out)\n pred = torch.argmax(out, dim=-1)\n print(pred)\n groundTruth = target\n perf = 0 \n \n return perf",
"_____no_output_____"
]
],
[
[
"# Setup Classifier type model",
"_____no_output_____"
]
],
[
[
"class DeepSITH_Classifier(nn.Module):\n def __init__(self, out_features, layer_params, dropout=.5):\n super(DeepSITH_Classifier, self).__init__()\n last_hidden = layer_params[-1]['hidden_size']\n self.hs = DeepSITH(layer_params=layer_params, dropout=dropout)\n self.to_out = nn.Linear(last_hidden, out_features)\n def forward(self, inp):\n x = self.hs(inp)\n x = self.to_out(x)\n return x",
"_____no_output_____"
]
],
[
[
"# TEST layers for correct taustars/parameters/cvalues\nThese dictionaries will not be used later. ",
"_____no_output_____"
]
],
[
[
"sith_params2 = {\"in_features\":1, \n \"tau_min\":.1, \"tau_max\":20.0, 'buff_max':40,\n \"k\":50,\n \"ntau\":5, 'g':0, \n \"ttype\":ttype, \n \"hidden_size\":10, \"act_func\":nn.ReLU()}\nsith_params3 = {\"in_features\":sith_params2['hidden_size'], \n \"tau_min\":.1, \"tau_max\":200.0, 'buff_max':240,\n \"k\":50,\n \"ntau\":5, 'g':0, \n \"ttype\":ttype, \n \"hidden_size\":20, \"act_func\":nn.ReLU()}\nlayer_params = [sith_params2, sith_params3]\nmodel = DeepSITH_Classifier(out_features=2,\n layer_params=layer_params, dropout=.0).double()\nprint(model)\nfor i, l in enumerate(model.hs.layers):\n print(\"Layer {}\".format(i), l.sith.tau_star)\ntot_weights = 0\nfor p in model.parameters():\n tot_weights += p.numel()\nprint(\"Total Weights:\", tot_weights)",
"DeepSITH_Classifier(\n (hs): DeepSITH(\n (layers): ModuleList(\n (0): _DeepSITH_core(\n (sith): iSITH(ntau=5, tau_min=0.1, tau_max=20.0, buff_max=40, dt=1, k=50, g=0)\n (linear): Sequential(\n (0): Linear(in_features=5, out_features=10, bias=True)\n (1): ReLU()\n )\n )\n (1): _DeepSITH_core(\n (sith): iSITH(ntau=5, tau_min=0.1, tau_max=200.0, buff_max=240, dt=1, k=50, g=0)\n (linear): Sequential(\n (0): Linear(in_features=50, out_features=20, bias=True)\n (1): ReLU()\n )\n )\n )\n (dropouts): ModuleList(\n (0): Dropout(p=0.0, inplace=False)\n )\n )\n (to_out): Linear(in_features=20, out_features=2, bias=True)\n)\nLayer 0 tensor([ 0.1000, 0.3761, 1.4142, 5.3183, 20.0000], dtype=torch.float64)\nLayer 1 tensor([1.0000e-01, 6.6874e-01, 4.4721e+00, 2.9907e+01, 2.0000e+02],\n dtype=torch.float64)\nTotal Weights: 1152\n"
]
],
[
[
"# Visualize the taustar buffers\nThey must all completely empty or there will be edge effects",
"_____no_output_____"
]
],
[
[
"plt.plot(model.hs.layers[0].sith.filters[:, 0, 0, :].detach().cpu().T);",
"_____no_output_____"
]
],
[
[
"# Training and testing",
"_____no_output_____"
]
],
[
[
"# You likely don't need this to be this long, but just in case.\nepochs = 500\n\n# Just for visualizing average loss through time. \nloss_buffer_size = 100",
"_____no_output_____"
],
[
"loss_func = torch.nn.CrossEntropyLoss()",
"_____no_output_____"
],
[
"sith_params2 = {\"in_features\":1, \n \"tau_min\":.1, \"tau_max\":20.0, 'buff_max':40,\n \"k\":50,\n \"ntau\":10, 'g':0, \n \"ttype\":ttype, \n \"hidden_size\":10, \"act_func\":nn.ReLU()}\nsith_params3 = {\"in_features\":sith_params2['hidden_size'], \n \"tau_min\":.1, \"tau_max\":200.0, 'buff_max':240,\n \"k\":50,\n \"ntau\":10, 'g':0, \n \"ttype\":ttype, \n \"hidden_size\":20, \"act_func\":nn.ReLU()}\nlayer_params = [sith_params2, sith_params3]\n\nmodel = DeepSITH_Classifier(out_features=5,\n layer_params=layer_params, \n \n dropout=0.).double()\noptimizer = torch.optim.Adam(model.parameters())\nperf = train_model(model, signals, target,optimizer, loss_func,\n epochs=epochs, \n loss_buffer_size=loss_buffer_size)\n#perfs.append(perf)",
"_____no_output_____"
],
[
"with open('filename.dill', 'wb') as handle:\n pickle.dump(perf, handle, protocol=pickle.HIGHEST_PROTOCOL)",
"_____no_output_____"
],
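[
"# Hypothetical counterpart (not in the original notebook) to the dump above: the saved\n# training history can be restored the same way. `pickle` is assumed to be imported\n# earlier in the notebook, as implied by the previous cell.\nwith open('filename.dill', 'rb') as handle:\n    perf_loaded = pickle.load(handle)",
"_____no_output_____"
],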
[
"fig = plt.figure(figsize=(8,10))\nax = fig.add_subplot(2,1,1)\nax.plot(perfs[-1]['loss'])\nax.set_ylabel(\"Loss\")\n#ax.set_xlabel(\"Presentation Number\")\n\nax = fig.add_subplot(2,1,2)\ndat = pd.DataFrame(perfs[-1]['perf'])\nax.plot(np.arange(dat.shape[0])*30, dat)\nax.set_ylabel(\"Classification Acc\")\nax.set_xlabel(\"Presentation Number\")\n()\nplt.savefig(join(\"figs\",\"DeepSith_training_H8\"))",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
d0c558817c70037170ac34c361f9e4929eb9a1ee | 23,727 | ipynb | Jupyter Notebook | Tokenize.ipynb | josephkd81/SentimentAnalysis | b365ec059c109928910dad1d83ba2799f0178716 | [
"MIT"
] | null | null | null | Tokenize.ipynb | josephkd81/SentimentAnalysis | b365ec059c109928910dad1d83ba2799f0178716 | [
"MIT"
] | null | null | null | Tokenize.ipynb | josephkd81/SentimentAnalysis | b365ec059c109928910dad1d83ba2799f0178716 | [
"MIT"
] | null | null | null | 34.944035 | 7,908 | 0.333628 | [
[
[
"f = open('to_token.txt', 'r')\nlines = f.readlines()\ntexts = []\n\nfor line in lines:\n texts.append(line)\n\nf.close()",
"_____no_output_____"
]
],
[
[
"### [tokenizer in Keras](https://keras.io/preprocessing/text/)",
"_____no_output_____"
]
],
[
[
"from keras.preprocessing.text import Tokenizer\n\ntk = Tokenizer(num_words=None, char_level=True, oov_token='_UNK')\n\ntk.fit_on_texts(texts)",
"Using TensorFlow backend.\n/anaconda3/envs/keras/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5\n return f(*args, **kwds)\n"
],
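[
"# Hypothetical usage check (not in the original notebook): once fitted, the char-level\n# Tokenizer can map raw text to index sequences with texts_to_sequences.\ntk.texts_to_sequences(texts[:1])",
"_____no_output_____"
],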
[
"vocab_token = tk.word_index",
"_____no_output_____"
],
[
"print(vocab_token)",
"{'속': 132, '개': 49, '몸': 567, '겪': 764, '물': 351, '월': 293, '글': 281, '나': 93, '넘': 553, '락': 43, '색': 470, '벗': 573, '”': 172, '협': 278, '반': 68, 'K': 91, '혐': 617, 'o': 360, '듣': 713, '품': 18, 'W': 485, '전': 28, '붐': 769, '닫': 684, '꾼': 564, '\"': 2, '發': 325, '저': 147, '왜': 358, 's': 203, '北': 623, '통': 234, '탈': 378, '밀': 344, '용': 222, '해': 139, ')': 106, '석': 179, '년': 185, '싸': 476, '제': 20, '꿈': 781, '강': 188, '식': 105, '찮': 571, '찬': 491, '뜨': 630, '왕': 786, '피': 75, '질': 276, '대': 26, '포': 128, '%': 67, '견': 283, '규': 453, '랑': 733, '광': 504, '폐': 215, '줘': 662, '간': 116, '찔': 643, '유': 39, '車': 756, '험': 516, '日': 690, '더': 257, '2': 32, '퍼': 339, '찾': 539, '혹': 562, '할': 247, '들': 181, '닝': 437, '변': 331, '젤': 301, '디': 308, '훨': 514, '턴': 725, '접': 555, 'n': 492, '딛': 518, '될': 452, '‧': 400, '없': 277, '땀': 731, 'l': 509, '램': 759, '본': 376, '율': 367, '종': 57, '둔': 426, '업': 76, 'A': 190, '기': 19, '러': 220, '투': 31, '힘': 668, '까': 144, 'v': 238, '합': 243, '례': 612, '착': 377, '응': 392, '정': 124, '폭': 192, '굿': 561, '네': 349, 'D': 149, '키': 248, '처': 217, '“': 161, '장': 53, '돼': 603, '똥': 601, '약': 5, 'f': 734, '흐': 461, '맹': 569, '빅': 416, '둘': 624, '과': 140, '중': 51, '단': 82, '칼': 558, '덮': 519, '팔': 444, '암': 218, '환': 318, '술': 332, '콤': 484, '천': 114, '울': 282, '많': 551, '코': 37, '쑥': 651, '복': 255, ',': 7, '생': 356, '누': 346, '화': 154, '격': 198, '밭': 526, '버': 272, 'R': 134, '텍': 513, '④': 657, '갑': 701, '추': 94, '흘': 425, '↓': 141, 'Ⅱ': 658, '쥬': 751, '레': 221, '족': 736, '=': 655, '난': 269, '소': 89, 'V': 382, '證': 150, '밋': 783, '뮨': 793, '모': 177, '막': 396, '결': 235, '략': 381, '침': 398, '쟁': 270, '너': 365, '껑': 387, '싹': 685, '때': 307, '–': 777, '원': 74, 'h': 583, '펀': 251, '왔': 752, '0': 35, '흙': 750, '티': 302, '봐': 633, '패': 463, '펴': 535, '만': 58, '예': 316, '능': 342, '꾸': 605, '융': 239, '및': 646, '핀': 557, '류': 451, 'Q': 284, '촉': 789, '랜': 589, '얹': 771, '8': 136, '숭': 626, '페': 541, '웃': 487, '카': 404, '픽': 320, '않': 497, '쾌': 667, '곳': 209, '쓸': 600, '려': 189, '텔': 223, '社': 641, '쌓': 738, '팜': 794, '3': 80, '묘': 712, '계': 96, '춤': 424, '로': 42, '낮': 345, '현': 137, '송': 493, '옐': 768, '염': 604, '짓': 575, '!': 323, '케': 208, '청': 231, '잇': 347, '뷰': 428, '런': 660, '맑': 581, 'B': 173, '병': 727, '료': 253, '하': 30, '짝': 613, '팡': 638, '한': 8, '엎': 572, '마': 126, '엔': 394, '서': 148, '혁': 333, '딜': 500, '꽃': 726, '電': 627, '튄': 791, '권': 59, '퀵': 211, '→': 774, '철': 609, '부': 123, '란': 184, '웨': 659, '이': 3, '뱅': 664, '잡': 486, '깜': 517, '꽁': 594, \"'\": 9, '국': 55, '혜': 384, '_UNK': 800, '빨': 636, '래': 143, '닉': 246, '배': 279, '터': 175, '률': 359, '증': 34, '니': 258, 'k': 489, '블': 224, '외': 115, '쥐': 682, '농': 512, '늘': 340, '치': 122, '확': 176, '극': 568, '집': 390, '독': 578, '편': 511, '랠': 693, '경': 129, '줌': 637, '졌': 436, '드': 130, '안': 180, '.': 4, '듭': 724, '있': 418, '났': 408, '-': 41, '구': 213, '솔': 457, '받': 625, '‘': 61, '탁': 536, '못': 403, '훈': 465, '혼': 501, '교': 780, '픈': 699, '충': 249, '설': 233, '놀': 708, '달': 275, 'M': 314, '럼': 610, '넷': 718, '7': 146, '좋': 494, '/': 417, 'P': 201, '커': 286, '애': 554, '았': 366, '밑': 608, '↑': 228, '백': 433, '방': 259, 'i': 415, '휘': 289, '벤': 354, '순': 135, '남': 338, '떠': 559, '큼': 799, '건': 207, '몰': 760, 'N': 151, '십': 315, '뒤': 621, '고': 77, '머': 483, '떨': 432, '친': 391, '됐': 640, '득': 697, '좌': 782, '링': 723, '클': 542, '토': 442, '답': 676, '첫': 515, '롱': 691, '&': 142, '돈': 556, '희': 454, '뭉': 795, '렁': 341, '플': 326, '절': 352, '찍': 587, '와': 295, '홈': 775, '멈': 761, '삼': 47, '멘': 321, '選': 591, '엠': 545, '값': 529, '주': 6, 
'된': 313, '셀': 97, '넥': 475, '덕': 788, '북': 236, '밸': 586, '책': 687, '림': 413, '맨': 592, '눈': 429, '브': 393, '등': 86, '켓': 427, '괜': 570, '는': 45, '습': 479, '악': 195, '덜': 355, '봇': 620, '시': 27, '쇠': 674, '내': 166, 'E': 87, '께': 744, '언': 310, '상': 25, '묶': 757, '골': 194, '테': 297, '富': 606, '욕': 728, '직': 111, '록': 389, '체': 226, '릴': 585, '뛰': 563, '필': 402, '번': 304, '운': 262, '떻': 634, '·': 21, '검': 472, '메': 375, '뛴': 790, '요': 271, '선': 113, '라': 108, '문': 232, '뇨': 566, '덤': 755, '활': 421, '창': 379, '+': 202, '르': 230, '쏟': 598, '닌': 719, '・': 695, '王': 677, '취': 499, '먹': 449, '냉': 670, '헤': 706, '춘': 632, '1': 48, '논': 187, '다': 73, '꿋': 597, 'H': 159, 'U': 242, '슈': 216, '_': 183, '세': 64, '일': 81, '수': 36, '톱': 374, '크': 205, '린': 348, '워': 410, '면': 273, '손': 405, '땐': 650, '中': 327, 'I': 182, '닷': 614, '6': 117, '심': 197, '파': 133, '신': 44, '발': 52, '획': 714, '야': 290, '되': 263, '붙': 548, '켠': 654, '회': 100, '롯': 722, '불': 171, '람': 395, '옵': 533, '미': 13, '잘': 534, '짐': 686, '톤': 508, '태': 254, '깃': 753, '’': 66, '각': 361, '항': 268, '휴': 287, '삭': 210, '비': 83, '냐': 552, '룹': 498, '아': 120, '슨': 595, '낙': 635, '팀': 665, '담': 300, '근': 373, '?': 88, '진': 110, '돌': 330, 'O': 168, '텀': 357, '5': 60, '콩': 709, '뱃': 766, '톡': 362, '량': 312, '을': 285, '던': 369, '루': 446, ':': 455, '랩': 607, '보': 72, '뜬': 797, '째': 422, '①': 550, '법': 785, '판': 296, '그': 380, '거': 118, '인': 79, '뢰': 490, '익': 167, '겹': 546, '셋': 383, '빗': 577, '行': 720, '밝': 772, '같': 631, '맞': 507, '샜': 735, '실': 69, '따': 411, '숨': 531, '룡': 698, '자': 23, '먼': 645, '듯': 576, '눌': 647, '빈': 702, '오': 11, 'e': 420, '돋': 435, '를': 458, 'g': 510, '바': 10, '성': 29, '붕': 412, '큰': 532, '맘': 747, '입': 406, '앞': 473, '위': 99, '후': 294, '빼': 565, '빠': 678, ']': 16, '옳': 679, '딘': 656, '칩': 652, '②': 431, '닥': 112, '했': 322, '쳐': 579, '섬': 622, '리': 24, '첩': 462, '(': 103, '볼': 593, '영': 164, '슷': 611, '감': 92, '꺽': 746, '게': 319, '룩': 522, '美': 240, '롭': 540, '호': 131, '헬': 227, '곡': 762, '김': 588, '조': 121, '[': 15, '가': 17, '망': 169, '썩': 481, '형': 229, '녹': 334, '올': 84, '여': 199, '박': 538, '턱': 466, '져': 305, '렸': 480, '즉': 663, '쳤': 796, '무': 138, '압': 615, '外': 443, '공': 95, '긴': 502, '짧': 681, ';': 386, 'S': 63, '찰': 496, '평': 299, '적': 50, '력': 214, '튈': 688, '섹': 729, '文': 675, '벨': 528, 'c': 488, '콘': 648, '쏠': 478, 'Y': 291, '타': 98, '으': 237, '산': 153, '…': 14, '느': 696, '즌': 438, '긍': 419, '끝': 264, '낀': 628, '낸': 574, '맥': 459, 'F': 371, '잔': 439, '련': 477, '새': 434, '름': 311, '지': 40, '출': 156, '역': 212, '션': 399, '온': 109, '튀': 740, '민': 363, '벌': 292, '축': 464, '갇': 666, '탄': 266, '죈': 716, '풍': 303, '작': 324, '널': 602, '명': 364, '액': 337, '탓': 537, '릭': 543, '분': 85, '살': 288, '학': 328, '징': 162, '힌': 527, '빛': 784, '동': 160, '향': 157, '젠': 250, '흔': 441, '렇': 703, '랐': 368, '노': 298, '의': 101, '음': 329, '어': 107, '칭': 661, '앉': 506, '임': 170, '③': 721, '특': 155, '움': 471, '롤': 467, '쓰': 669, '잠': 544, '뎌': 763, '짜': 456, '양': 178, '탠': 653, '사': 56, '뻥': 739, '윤': 584, '푸': 765, '强': 649, '쉼': 673, 'X': 495, '괴': 397, '두': 219, '숏': 692, '령': 353, '틈': 707, '韓': 521, '데': 204, '트': 54, '털': 525, '날': 401, '었': 317, '객': 618, '매': 38, '최': 206, 'L': 244, '9': 174, '급': 127, '베': 385, '퇴': 524, '초': 370, '쇼': 267, '허': 469, '표': 65, '귀': 779, '뚝': 580, '총': 158, '꼬': 683, '칠': 590, '갔': 730, '|': 742, ' ': 1, '멀': 715, '재': 104, '뽑': 754, '욱': 503, '차': 191, '4': 78, '센': 423, '닙': 776, '점': 225, '홍': 616, '도': 46, '것': 582, '옥': 265, 'T': 71, 'G': 200, '프': 193, '엇': 409, '연': 125, '완': 414, '財': 741, 
'효': 350, '싼': 798, 'ㆍ': 152, '늪': 671, '열': 336, '얼': 440, '뿔': 705, '뿐': 596, '씨': 694, '킹': 710, '히': 335, '∙': 245, '삐': 745, '당': 260, '함': 743, '닭': 619, '준': 241, '우': 165, '셜': 704, '콜': 447, '금': 102, '굳': 644, '길': 372, '늬': 749, '승': 119, 'C': 163, '갈': 280, '별': 309, '뉴': 90, 'J': 448, '즈': 767, '목': 33, '`': 306, '벼': 732, '립': 737, '겨': 689, '긋': 792, '웅': 468, '풀': 711, '뀐': 748, '또': 445, '株': 62, '싶': 530, '른': 430, '엄': 523, '잃': 549, '알': 407, '억': 261, 't': 252, '홉': 700, '숙': 758, '론': 256, '人': 482, '관': 145, '높': 450, 'p': 505, '떤': 787, '택': 778, '스': 12, '틀': 460, '갯': 672, '믿': 599, '든': 629, '끼': 770, '줄': 196, '에': 22, '핑': 520, '참': 560, '템': 388, '엘': 474, '은': 70, '~': 717, '걸': 343, '행': 186, '컨': 547, '깐': 642, '황': 274, '꺼': 639, '잉': 773, '딱': 680}\n"
],
[
"vocab_token_list = list(vocab_token)",
"_____no_output_____"
],
[
"len(vocab_token_list)",
"_____no_output_____"
],
[
"vocab_token_list.append('_PAD')",
"_____no_output_____"
],
[
"len(vocab_token_list)",
"_____no_output_____"
],
[
"idx = []\nidx_ = -1\n\nfor i in range(len(vocab_token_list)):\n idx_ += 1\n idx.append(idx_)\n \n#df_idx = pd.DataFrame(idx, columns=['token_idx'])\n\n#df_vocab_token = pd.merge(data, df_idx, columns=['char', 'idx'])",
"_____no_output_____"
],
[
"len(idx)",
"_____no_output_____"
],
[
"import pandas as pd\n\ndf = pd.DataFrame(vocab_token_list, columns=['char'])\ndf_idx = pd.DataFrame(idx, columns=['idx'])\n\ndf_vocab_token = pd.concat([df, df_idx], axis=1)",
"_____no_output_____"
],
[
"df_vocab_token",
"_____no_output_____"
],
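[
"# Hypothetical sketch (not in the original notebook): the manual edit described in the\n# note at the end of this notebook could also be applied programmatically before export,\n# giving the special tokens their fixed ids.\ndf_vocab_token.loc[df_vocab_token['char'] == '_PAD', 'idx'] = 1\ndf_vocab_token.loc[df_vocab_token['char'] == '_UNK', 'idx'] = 0",
"_____no_output_____"
],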
[
"import csv\n\ndf_vocab_token.to_csv(r'Vocab.token.txt', header=None, index=None, sep='\\t', mode='a')",
"_____no_output_____"
]
],
[
[
"- _PAD 와 _UNK 는 수작업으로 각각 1, 0 으로 바꿔주었음. \n- 파일명을 token.vocab.txt 로 바꿔줌",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown"
] | [
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
d0c55a7f3b82e71f14a7929e3278011bfad6aca6 | 7,323 | ipynb | Jupyter Notebook | feature_extraction_diagrams/A20_BSS_BlindSourceSeperation/ICA_2_out_channel.ipynb | BA-HanseML/NF_Prj_MIMII_Dataset | c9dd130a48c5ee28491a3f9369ace8f7217753d6 | [
"MIT"
] | 10 | 2020-08-25T21:12:32.000Z | 2021-11-04T22:14:37.000Z | feature_extraction_diagrams/A20_BSS_BlindSourceSeperation/ICA_2_out_channel.ipynb | BA-HanseML/NF_Prj_MIMII_Dataset | c9dd130a48c5ee28491a3f9369ace8f7217753d6 | [
"MIT"
] | 44 | 2020-05-04T11:37:55.000Z | 2021-09-26T04:12:23.000Z | feature_extraction_diagrams/A20_BSS_BlindSourceSeperation/ICA_2_out_channel.ipynb | ArneSch/NF_Prj_MIMII_Dataset | c9dd130a48c5ee28491a3f9369ace8f7217753d6 | [
"MIT"
] | 4 | 2020-11-24T02:14:13.000Z | 2021-07-01T08:52:59.000Z | 24.491639 | 149 | 0.562201 | [
[
[
"# ICA 2 channel\nnaive apporche that 8 recordings of only machine and backgroudn noise and picking of main source by analysing the estiamtion of demixing matrix",
"_____no_output_____"
]
],
[
[
"BASE_FOLDER = '../../'\n%run -i ..\\..\\utility\\feature_extractor\\JupyterLoad_feature_extractor.py\nfile_path = r'\\dataset\\6dB\\pump\\id_02\\abnormal\\00000004.wav'\n# load\nica2 = feature_extractor_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)",
"load feature_extractor_mother\nload feature_extractor_mel_spectra\nload feature_extractor_psd\nload feature_extractor_ICA2\nload feature_extractore_pre_nnFilterDenoise\nload extractor_diagram_mother\n"
],
[
"# get flat feature mix matrix\nica2.get_feature({'function': 'flat'}).shape",
"_____no_output_____"
],
[
"# get max range in mix matrix\nica2.get_feature({'function': 'maxrange'})",
"_____no_output_____"
],
[
"# get two channel seperation back\nwmf = ica2.get_wav_memory_file()\nwmf.channel.shape",
"_____no_output_____"
],
[
"# get the main channel of seperation back ( the main channel is the one with most mixing range)\nwmf = ica2.get_wav_memory_file(True)\nwmf.channel.shape",
"_____no_output_____"
],
[
"# Some examples\nfile_path = r'\\dataset\\6dB\\pump\\id_00\\abnormal\\00000004.wav'\nica2 = extractor_diagram_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)\nprint(file_path)\nprint(ica2.get_feature({'function': 'maxrange'}))\n\nfile_path = r'\\dataset\\6dB\\pump\\id_00\\normal\\00000004.wav'\nica2 = extractor_diagram_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)\nprint(file_path)\nprint(ica2.get_feature({'function': 'maxrange'}))",
"\\dataset\\6dB\\pump\\id_00\\abnormal\\00000004.wav\n5.982928881967606\n\\dataset\\6dB\\pump\\id_00\\normal\\00000004.wav\n2.3690641155381478\n"
],
[
"file_path = r'\\dataset\\6dB\\fan\\id_02\\abnormal\\00000004.wav'\nica2 = extractor_diagram_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)\nprint(file_path)\nprint(ica2.get_feature({'function': 'maxrange'}))\n\nfile_path = r'\\dataset\\6dB\\fan\\id_02\\normal\\00000004.wav'\nica2 = extractor_diagram_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)\nprint(file_path)\nprint(ica2.get_feature({'function': 'maxrange'}))",
"\\dataset\\6dB\\fan\\id_02\\abnormal\\00000004.wav\n1.7735997625722593\n\\dataset\\6dB\\fan\\id_02\\normal\\00000004.wav\n3.667187365257009\n"
],
[
"file_path = r'\\dataset\\6dB\\slider\\id_04\\abnormal\\00000004.wav'\nica2 = extractor_diagram_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)\nprint(file_path)\nprint(ica2.get_feature({'function': 'maxrange'}))\n\nfile_path = r'\\dataset\\6dB\\slider\\id_04\\normal\\00000004.wav'\nica2 = extractor_diagram_ICA2(BASE_FOLDER)\nica2.create_from_wav(file_path)\nprint(file_path)\nprint(ica2.get_feature({'function': 'maxrange'}))",
"\\dataset\\6dB\\slider\\id_04\\abnormal\\00000004.wav\n2.8879229332571557\n\\dataset\\6dB\\slider\\id_04\\normal\\00000004.wav\n2.819371177558146\n"
]
]
] | [
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
d0c56d60096fd1f0dd2f1a489d08de8b1980c989 | 27,258 | ipynb | Jupyter Notebook | notebooks/vegetation/NDVI_STD.ipynb | admariner/data_cube_notebooks | 984a84b2f92114040e36a533d3f476dcf384695e | [
"Apache-2.0"
] | null | null | null | notebooks/vegetation/NDVI_STD.ipynb | admariner/data_cube_notebooks | 984a84b2f92114040e36a533d3f476dcf384695e | [
"Apache-2.0"
] | null | null | null | notebooks/vegetation/NDVI_STD.ipynb | admariner/data_cube_notebooks | 984a84b2f92114040e36a533d3f476dcf384695e | [
"Apache-2.0"
] | null | null | null | 34.856777 | 575 | 0.58603 | [
[
[
"<a id=\"ndvi_std_top\"></a>\n# NDVI STD\n\nDeviations from an established average z-score. \n\n<hr> \n \n# Notebook Summary\n\n* A baseline for each month is determined by measuring NDVI over a set time\n* The data cube is used to visualize at NDVI anomalies over time.\n* Anomalous times are further explored and visualization solutions are proposed.\n\n<hr> \n\n# Index \n\n* [Import Dependencies and Connect to the Data Cube](#ndvi_std_import)\n* [Choose Platform and Product](#ndvi_std_plat_prod)\n* [Get the Extents of the Cube](#ndvi_std_extents)\n* [Define the Extents of the Analysis](#ndvi_std_define_extents)\n* [Load Data from the Data Cube](#ndvi_std_load_data)\n* [Create and Use a Clean Mask](#ndvi_std_clean_mask)\n* [Calculate the NDVI](#ndvi_std_calculate)\n* [Convert the Xarray to a Dataframe](#ndvi_std_pandas)\n* [Define a Function to Visualize Values Over the Region](#ndvi_std_visualization_function)\n* [Visualize the Baseline Average NDVI by Month](#ndvi_std_baseline_mean_ndvi)\n* [Visualize the Baseline Distributions Binned by Month](#ndvi_std_boxplot_analysis)\n* [Visualize the Baseline Kernel Distributions Binned by Month](#ndvi_std_violinplot_analysis)\n* [Plot Z-Scores by Month and Year](#ndvi_std_pixelplot_analysis)\n* [Further Examine Times Of Interest](#ndvi_std_heatmap_analysis)\n\n<hr> \n\n# How It Works\n\nTo detect changes in plant life, we use a measure called NDVI. \n* <font color=green>NDVI</font> is the ratio of the difference between amount of near infrared light <font color=red>(NIR)</font> and red light <font color=red>(RED)</font> divided by their sum.\n<br>\n\n$$ NDVI = \\frac{(NIR - RED)}{(NIR + RED)}$$ \n\n<br>\n<div class=\"alert-info\">\nThe idea is to observe how much red light is being absorbed versus reflected. Photosynthetic plants absorb most of the visible spectrum's wavelengths when they are healthy. When they aren't healthy, more of that light will get reflected. This makes the difference between <font color=red>NIR</font> and <font color=red>RED</font> much smaller which will lower the <font color=green>NDVI</font>. The resulting values from doing this over several pixels can be used to create visualizations for the changes in the amount of photosynthetic vegetation in large areas.\n</div>",
"_____no_output_____"
],
[
"## <span id=\"ndvi_std_import\">Import Dependencies and Connect to the Data Cube [▴](#ndvi_std_top) </span> ",
"_____no_output_____"
]
],
[
[
"import sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))\n\nimport time\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom matplotlib.ticker import FuncFormatter\nimport seaborn as sns\n\nfrom utils.data_cube_utilities.dc_load import get_product_extents\nfrom utils.data_cube_utilities.dc_display_map import display_map\nfrom utils.data_cube_utilities.clean_mask import landsat_clean_mask_full\n\nfrom datacube.utils.aws import configure_s3_access\nconfigure_s3_access(requester_pays=True)\n\nimport datacube\nfrom utils.data_cube_utilities.data_access_api import DataAccessApi\napi = DataAccessApi()\ndc = api.dc",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_plat_prod\">Choose Platform and Product [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"# Change the data platform and data cube here\n\nproduct = 'ls7_usgs_sr_scene'\nplatform = 'LANDSAT_7'\ncollection = 'c1'\nlevel = 'l2'\n\n# product = 'ls8_usgs_sr_scene'\n# platform = 'LANDSAT_8'\n# collection = 'c1'\n# level = 'l2'",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_extents\">Get the Extents of the Cube [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)\n\nprint(\"{}:\".format(platform))\nprint(\"Lat bounds:\", full_lat)\nprint(\"Lon bounds:\", full_lon)\nprint(\"Time bounds:\", min_max_dates)",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_define_extents\">Define the Extents of the Analysis [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"display_map(full_lat, full_lon)",
"_____no_output_____"
],
[
"params = {'latitude': (0.55, 0.6),\n 'longitude': (35.55, 35.5),\n 'time': ('2008-01-01', '2010-12-31')}",
"_____no_output_____"
],
[
"display_map(params[\"latitude\"], params[\"longitude\"])",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_load_data\">Load Data from the Data Cube [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"dataset = dc.load(**params,\n platform = platform,\n product = product,\n measurements = ['red', 'green', 'blue', 'swir1', 'swir2', 'nir', 'pixel_qa'],\n dask_chunks={'time':1, 'latitude':1000, 'longitude':1000}).persist()",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_clean_mask\">Create and Use a Clean Mask [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"# Make a clean mask to remove clouds and scanlines.\nclean_mask = landsat_clean_mask_full(dc, dataset, product=product, platform=platform, \n collection=collection, level=level)\n\n\n# Filter the scenes with that clean mask\ndataset = dataset.where(clean_mask)",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_calculate\">Calculate the NDVI [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"#Calculate NDVI\nndvi = (dataset.nir - dataset.red)/(dataset.nir + dataset.red)",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_pandas\">Convert the Xarray to a Dataframe [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"#Cast to pandas dataframe\ndf = ndvi.to_dataframe(\"NDVI\")\n\n#flatten the dimensions since it is a compound hierarchical dataframe\ndf = df.stack().reset_index()\n\n#Drop the junk column that was generated for NDVI\ndf = df.drop([\"level_3\"], axis=1)\n\n#Preview first 5 rows to make sure everything looks as it should\ndf.head()",
"_____no_output_____"
],
[
"#Rename the NDVI column to the appropriate name\ndf = df.rename(index=str, columns={0: \"ndvi\"})\n\n#clamp NDVI between 0 and 1\ndf.ndvi = df.ndvi.clip(lower=0)\n\n#Add columns for Month and Year for convenience\ndf[\"Month\"] = df.time.dt.month\ndf[\"Year\"] = df.time.dt.year\n\n#Preview changes\ndf.head()",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_visualization_function\">Define a Function to Visualize Values Over the Region [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"#Create a function for formatting our axes\ndef format_axis(axis, digits = None, suffix = \"\"):\n \n #Get Labels\n labels = axis.get_majorticklabels()\n \n #Exit if empty\n if len(labels) == 0: return\n \n #Create formatting function\n format_func = lambda x, pos: \"{0}{1}\".format(labels[pos]._text[:digits],suffix)\n \n #Use formatting function\n axis.set_major_formatter(FuncFormatter(format_func))\n \n\n#Create a function for examining the z-score and NDVI of the region graphically\ndef examine(month = list(df[\"time\"].dt.month.unique()), year = list(df[\"time\"].dt.year.unique()), value_name = \"z_score\"):\n \n #This allows the user to pass single floats as values as well\n if type(month) is not list: month = [month]\n if type(year) is not list: year = [year]\n \n #pivoting the table to the appropriate layout\n piv = pd.pivot_table(df[df[\"time\"].dt.year.isin(year) & df[\"time\"].dt.month.isin(month)],\n values=value_name,index=[\"latitude\"], columns=[\"longitude\"])\n \n #Sizing\n plt.rcParams[\"figure.figsize\"] = [11,11]\n \n #Plot pivot table as heatmap using seaborn\n val_range = (-1.96,1.96) if value_name is \"z_score\" else (df[value_name].unique().min(),df[value_name].unique().max())\n ax = sns.heatmap(piv, square=False, cmap=\"RdYlGn\",vmin=val_range[0],vmax=val_range[1], center=0)\n\n #Formatting \n format_axis(ax.yaxis, 6)\n format_axis(ax.xaxis, 7) \n plt.setp(ax.xaxis.get_majorticklabels(), rotation=90 )\n plt.gca().invert_yaxis()",
"_____no_output_____"
]
],
[
[
"Lets examine the average <font color=green>NDVI</font> across all months and years to get a look at the region",
"_____no_output_____"
]
],
[
[
"#It defaults to binning the entire range of months and years so we can just leave those parameters out\nexamine(value_name=\"ndvi\")",
"_____no_output_____"
]
],
[
[
"This gives us an idea of the healthier areas of the region before we start looking at specific months and years.",
"_____no_output_____"
],
[
"## <span id=\"ndvi_std_baseline_mean_ndvi\">Visualize the Baseline Average NDVI by Month [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"#Make labels for convenience\nlabels = [\"Jan\", \"Feb\", \"Mar\", \"Apr\", \"May\", \"Jun\", \"Jul\", \"Aug\", \"Sep\", \"Oct\", \"Nov\", \"Dec\"]\n\n#Initialize an empty pandas Series\ndf[\"z_score\"] = pd.Series()\n\n#declare list for population\nbinned_data = list()\n\n#Calculate monthly binned z-scores from the composited monthly NDVI mean and store them\nfor i in range(12):\n \n #grab z_score and NDVI for the appropriate month\n temp = df[[\"z_score\", \"ndvi\"]][df[\"Month\"] == i+1]\n \n #populate z_score\n df.loc[df[\"Month\"] == i+1,\"z_score\"] = (temp[\"ndvi\"] - temp[\"ndvi\"].mean())/temp[\"ndvi\"].std(ddof=0)\n \n #print the month next to its mean NDVI and standard deviation\n binned_data.append((labels[i], temp[\"ndvi\"].mean(), temp[\"ndvi\"].std()))\n\n#Create dataframe for binned values\nbinned_data = pd.DataFrame.from_records(binned_data, columns=[\"Month\",\"Mean\", \"Std_Dev\"])\n \n#print description for clarification\nprint(\"Monthly Average NDVI over Baseline Period\")\n\n#display binned data\nbinned_data",
"_____no_output_____"
]
],
[
[
"## <span id=\"ndvi_std_boxplot_analysis\">Visualize the Baseline Distributions Binned by Month [▴](#ndvi_std_top)</span>",
"_____no_output_____"
]
],
[
[
"#Set figure size to a larger size\nplt.rcParams[\"figure.figsize\"] = [16,9]\n\n#Create the boxplot\ndf.boxplot(by=\"Month\",column=\"ndvi\")\n\n#Create the mean line\nplt.plot(binned_data.index+1, binned_data.Mean, 'r-')\n\n#Create the one standard deviation away lines\nplt.plot(binned_data.index+1, binned_data.Mean-binned_data.Std_Dev, 'b--')\nplt.plot(binned_data.index+1, binned_data.Mean+binned_data.Std_Dev, 'b--')\n\n#Create the two standard deviations away lines\nplt.plot(binned_data.index+1, binned_data.Mean-(2*binned_data.Std_Dev), 'g-.', alpha=.3)\nplt.plot(binned_data.index+1, binned_data.Mean+(2*binned_data.Std_Dev), 'g-.', alpha=.3)",
"_____no_output_____"
]
],
[
[
"The plot above shows the distributions for each individual month over the baseline period.\n<br>\n- The <b><font color=red>red</font></b> line is the mean line which connects the <b><em>mean values</em></b> for each month. \n <br>\n- The dotted <b><font color=blue>blue</font></b> lines are exactly <b><em>one standard deviation away</em></b> from the mean and show where the NDVI values fall within 68% of the time, according to the Empirical Rule. \n <br>\n- The <b><font color=green>green</font></b> dotted lines are <b><em>two standard deviations away</em></b> from the mean and show where an estimated 95% of the NDVI values are contained for that month.\n<br>\n\n<div class=\"alert-info\"><font color=black> <em><b>NOTE: </b>You will notice a seasonal trend in the plot above. If we had averaged the NDVI without binning, this trend data would be lost and we would end up comparing specific months to the average derived from all the months combined, instead of individually.</em></font>\n</div>",
"_____no_output_____"
],
[
"## <span id=\"ndvi_std_violinplot_analysis\">Visualize the Baseline Kernel Distributions Binned by Month [▴](#ndvi_std_top)</span>\nThe violinplot has the advantage of allowing us to visualize kernel distributions but comes at a higher computational cost.",
"_____no_output_____"
]
],
[
[
"sns.violinplot(x=df.Month, y=\"ndvi\", data=df)",
"_____no_output_____"
]
],
[
[
"<hr> \n\n## <span id=\"ndvi_std_pixelplot_analysis\">Plot Z-Scores by Month and Year [▴](#ndvi_std_top)</span>",
"_____no_output_____"
],
[
"### Pixel Plot Visualization",
"_____no_output_____"
]
],
[
[
"#Create heatmap layout from dataframe\nimg = pd.pivot_table(df, values=\"z_score\",index=[\"Month\"], columns=[\"Year\"], fill_value=None)\n\n#pass the layout to seaborn heatmap\nax = sns.heatmap(img, cmap=\"RdYlGn\", annot=True, fmt=\"f\", center = 0)\n\n#set the title for Aesthetics\nax.set_title('Z-Score\\n Regional Selection Averages by Month and Year')\nax.fill= None",
"_____no_output_____"
]
],
[
[
"Each block in the visualization above is representative of the deviation from the average for the region selected in a specific month and year. The omitted blocks are times when there was no satellite imagery available. Their values must either be inferred, ignored, or interpolated.\n\nYou may notice long vertical strips of red. These are strong indications of drought since they deviate from the baseline consistently over a long period of time. ",
"_____no_output_____"
],
[
"## <span id=\"ndvi_std_heatmap_analysis\">Further Examine Times Of Interest [▴](#ndvi_std_top)</span>",
"_____no_output_____"
],
[
"### Use the function we created to examine times of interest",
"_____no_output_____"
]
],
[
[
"#Lets look at that drought in 2009 during the months of Aug-Oct\n\n#This will generate a composite of the z-scores for the months and years selected\nexamine(month = [8], year = 2009, value_name=\"z_score\")",
"_____no_output_____"
]
],
[
[
"Note:\nThis graphical representation of the region shows the amount of deviation from the mean for each pixel that was binned by month.",
"_____no_output_____"
],
[
"### Grid Layout of Selected Times",
"_____no_output_____"
]
],
[
[
"#Restrict input to a maximum of about 12 grids (months*year) for memory\ndef grid_examine(month = None, year = None, value_name = \"z_score\"):\n \n #default to all months then cast to list, if not already\n if month is None: month = list(df[\"Month\"].unique())\n elif type(month) is int: month = [month]\n\n #default to all years then cast to list, if not already\n if year is None: year = list(df[\"Year\"].unique())\n elif type(year) is int: year = [year]\n\n #get data within the bounds specified\n data = df[np.logical_and(df[\"Month\"].isin(month) , df[\"Year\"].isin(year))]\n \n #Set the val_range to be used as the vertical limit (vmin and vmax)\n val_range = (-1.96,1.96) if value_name is \"z_score\" else (df[value_name].unique().min(),df[value_name].unique().max())\n \n #create colorbar to export and use on grid\n Z = [[val_range[0],0],[0,val_range[1]]]\n CS3 = plt.contourf(Z, 200, cmap=\"RdYlGn\")\n plt.clf() \n \n \n #Define facet function to use for each tile in grid\n def heatmap_facet(*args, **kwargs):\n data = kwargs.pop('data')\n img = pd.pivot_table(data, values=value_name,index=[\"latitude\"], columns=[\"longitude\"], fill_value=None)\n \n ax = sns.heatmap(img, cmap=\"RdYlGn\",vmin=val_range[0],vmax=val_range[1],\n center = 0, square=True, cbar=False, mask = img.isnull())\n\n plt.setp(ax.xaxis.get_majorticklabels(), rotation=90 )\n plt.gca().invert_yaxis()\n \n \n #Create grid using the face function above\n with sns.plotting_context(font_scale=5.5):\n g = sns.FacetGrid(data, col=\"Year\", row=\"Month\", height=5,sharey=True, sharex=True) \n mega_g = g.map_dataframe(heatmap_facet, \"longitude\", \"latitude\") \n g.set_titles(col_template=\"Yr= {col_name}\", fontweight='bold', fontsize=18) \n \n #Truncate axis tick labels using the format_axis function defined in block 13\n for ax in g.axes:\n format_axis(ax[0]._axes.yaxis, 6)\n format_axis(ax[0]._axes.xaxis, 7)\n \n #create a colorbox and apply the exported colorbar\n cbar_ax = g.fig.add_axes([1.015,0.09, 0.015, 0.90])\n cbar = plt.colorbar(cax=cbar_ax, mappable=CS3)",
"_____no_output_____"
],
[
"grid_examine(month=[8,9,10], year=[2008,2009,2010])",
"_____no_output_____"
]
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
]
] |
d0c572769240149a9b04fb9404673a0b8d7d2c97 | 15,290 | ipynb | Jupyter Notebook | Gardenkiak/Programazioa/Untitled.ipynb | mpenagar/Konputaziorako-Sarrera | 1f276cbda42e9d3d0beb716249fadbad348533d7 | [
"MIT"
] | null | null | null | Gardenkiak/Programazioa/Untitled.ipynb | mpenagar/Konputaziorako-Sarrera | 1f276cbda42e9d3d0beb716249fadbad348533d7 | [
"MIT"
] | null | null | null | Gardenkiak/Programazioa/Untitled.ipynb | mpenagar/Konputaziorako-Sarrera | 1f276cbda42e9d3d0beb716249fadbad348533d7 | [
"MIT"
] | null | null | null | 120.393701 | 12,756 | 0.883126 | [
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nplt.ion()\n\n\"Hello world\"",
"_____no_output_____"
],
[
"# of course you can show figures\n\ndef polynom(x):\n return 2 * x**2 - 20 * x + 2\n\nX = np.linspace(-10, 10)\nY = polynom(X)\nplt.plot(X, Y);",
"_____no_output_____"
],
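[
"# Hypothetical extra cell (not in the original notebook): a static sketch of the\n# translation by variable change that the next cell's interactive demo alludes to.\noffset = 3.0\nplt.plot(X, polynom(X), label='polynom(x)')\nplt.plot(X, polynom(X - offset), label='polynom(x - offset)')\nplt.legend();",
"_____no_output_____"
],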
[
"# and everything works as usual\n\n# an animation to illustrate \n# translation by variable change\nfrom ipywidgets import interact, FloatSlider\n\ndef printit(offset):\n print(offset)\n \ninteract(lambda offset : print(offset), \n offset=FloatSlider(min=-100., max=50.,\n step=0.25));",
"_____no_output_____"
]
]
] | [
"code"
] | [
[
"code",
"code",
"code"
]
] |
d0c5837e9d57faa0b6cb68b7ad2fd7bfaa517f39 | 101,413 | ipynb | Jupyter Notebook | TestCountry0115.ipynb | ChangRitaaaa/Test | 3a211f5226a77e023ca9f718287f0ce8ad4ce8e6 | [
"MIT"
] | null | null | null | TestCountry0115.ipynb | ChangRitaaaa/Test | 3a211f5226a77e023ca9f718287f0ce8ad4ce8e6 | [
"MIT"
] | null | null | null | TestCountry0115.ipynb | ChangRitaaaa/Test | 3a211f5226a77e023ca9f718287f0ce8ad4ce8e6 | [
"MIT"
] | null | null | null | 153.889226 | 20,320 | 0.875765 | [
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\n# This is a custom matplotlib style that I use for most of my charts\n\ncountry_data = pd.read_csv('C:/Users/user/Desktop/test.csv')\ncountry_data",
"_____no_output_____"
]
],
[
[
"圖中顯示2016到2017入境各個國家人數\n多數國家在2017人數皆些微成長",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(15, 7))\nax1 = fig.add_subplot(111)\n\nfor (i, row) in country_data.iterrows():\n plt.bar([i - 0.2, i + 0.2], [row['2016'], row['2017']],\n color=['#CC6699', '#008AB8'], width=0.4, align='center', edgecolor='none')\n \nplt.xlim(-1, 13)\nplt.xticks(range(0, 13), country_data['country'], fontsize=11)\nplt.grid(False, axis='x')\nplt.yticks(np.arange(0, 5e6, 1e6),\n ['{}m'.format(int(x / 1e6)) if x > 0 else 0 for x in np.arange(0, 5e6, 1e6)])\nplt.xlabel('Country')\nplt.ylabel('Number of people (millions)')\n\nplt.savefig('pop_pyramid_grouped.pdf')\n;",
"_____no_output_____"
]
],
[
[
"DIFFERENCE\n各個國家在 2016到2017年間出入境的人數比較\n大部分國家為正成長,少數為負成長。",
"_____no_output_____"
]
],
[
[
"fig = plt.figure(figsize=(15, 7))\nax1 = fig.add_subplot(111)\nfor (i, row) in country_data.iterrows():\n plt.bar([i], [row['difference']],\n color=['#CC6699'], width=0.6, align='center', edgecolor='none')\n \nplt.xlim(-1, 13)\nplt.xticks(range(0, 13), country_data['country'], fontsize=11)\nplt.grid(False, axis='x')\nplt.yticks(np.arange(0, 4e5, 1e5),\n ['{}m'.format(int(x / 1e5)) if x > 0 else 0 for x in np.arange(0, 4e5, 1e5)])\nplt.xlabel('Country')\nplt.ylabel('Number of people (billions)')\n;",
"_____no_output_____"
],
[
"GDP\n國內生產總值是國民經濟核算的核心指標,在衡量一個國家或地區經濟狀況和發展水準有相當的重要性,\n此一數值亦包括移住勞工的薪資在內。\n可以看出在出入境較高的國家不論在2016或2017都有極高的GDP",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(15, 7))\nax1 = fig.add_subplot(111)\n\nfor (i, row) in country_data.iterrows():\n plt.bar([i - 0.2, i + 0.2], [row['2016GDP'], row['2017GDP']],\n color=['#CC6699', '#008AB8'], width=0.4, align='center', edgecolor='none')\n \nplt.xlim(-1, 13)\nplt.xticks(range(0, 13), country_data['country'], fontsize=11)\nplt.grid(False, axis='x')\nplt.yticks(np.arange(0, 13000000, 1000000),\n ['{}m'.format(int(x / 1000000)) if x > 0 else 0 for x in np.arange(0, 13000000, 1000000)])\nplt.xlabel('Country')\nplt.ylabel('Number of people (millions)')\n\nplt.savefig('pop_pyramid_grouped.pdf')\n;",
"_____no_output_____"
],
[
"m = country_data['2017']\nm",
"_____no_output_____"
],
[
"n = country_data['2017GDP']\nn",
"_____no_output_____"
]
],
[
[
"RELATION \nx軸為2017入境各個國家的人數\nY軸為2017該國家GDP總額\n當出入境人數增加時,該國當年GDP也隨之上升。\n推測在各國家觀光產業所佔生產毛額都有相當大的比例。\n可看出兩者有正相關關係。",
"_____no_output_____"
]
],
[
[
"plt.plot(m,n, 'bo')",
"_____no_output_____"
],
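[
"# Hypothetical addition (not in the original notebook): quantify the positive correlation\n# described above with the Pearson correlation coefficient of the 2017 arrivals (m)\n# and 2017 GDP (n) defined earlier.\nnp.corrcoef(m, n)[0, 1]",
"_____no_output_____"
],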
[
"x = country_data['growth']\nx",
"_____no_output_____"
],
[
"y = country_data['GDPgrowth']\ny",
"_____no_output_____"
]
],
[
[
"影響GDP成長因素有許多不同於原因\n拿日本和香港做比較\n從圖表中可見入境日本的人口數再2016到2017年間是成長的\n卻在這兩年間GDP些微下降\n相反的在兩年間入境香港的人數下降,GDP卻上升\n可得知雖然觀光經濟在國家GDP中佔有滿大的比重,但在某些國家中並非最為重要的部分。",
"_____no_output_____"
]
],
[
[
"plt.plot(x,y, 'bo')",
"_____no_output_____"
]
]
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
d0c59382295d4dbb4c5b8623c4d95c68f8f955d1 | 462,210 | ipynb | Jupyter Notebook | gan-mnist/MNIST_GAN_Exercise.ipynb | NGrech/deep-learning-v2-pytorch | 02c11938034429975d6e22bb4bedef1981f9a90f | [
"MIT"
] | null | null | null | gan-mnist/MNIST_GAN_Exercise.ipynb | NGrech/deep-learning-v2-pytorch | 02c11938034429975d6e22bb4bedef1981f9a90f | [
"MIT"
] | null | null | null | gan-mnist/MNIST_GAN_Exercise.ipynb | NGrech/deep-learning-v2-pytorch | 02c11938034429975d6e22bb4bedef1981f9a90f | [
"MIT"
] | null | null | null | 432.780899 | 198,728 | 0.923931 | [
[
[
"# Generative Adversarial Network\n\nIn this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!\n\nGANs were [first reported on](https://arxiv.org/abs/1406.2661) in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:\n\n* [Pix2Pix](https://affinelayer.com/pixsrv/) \n* [CycleGAN & Pix2Pix in PyTorch, Jun-Yan Zhu](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix)\n* [A list of generative models](https://github.com/wiseodd/generative-models)\n\nThe idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes \"fake\" data to pass to the discriminator. The discriminator also sees real training data and predicts if the data it's received is real or fake. \n> * The generator is trained to fool the discriminator, it wants to output data that looks _as close as possible_ to real, training data. \n* The discriminator is a classifier that is trained to figure out which data is real and which is fake. \n\nWhat ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.\n\n<img src='assets/gan_pipeline.png' width=70% />\n\nThe general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector that the generator uses to construct its fake images. This is often called a **latent vector** and that vector space is called **latent space**. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator.\n\nIf you're interested in generating only new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\r\n%config Completer.use_jedi = True\r\n\r\nimport numpy as np\r\nimport torch\r\nimport matplotlib.pyplot as plt\r\n\r\nimport os\r\nos.environ[\"KMP_DUPLICATE_LIB_OK\"]=\"TRUE\"\r\n",
"_____no_output_____"
],
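[
"# Hypothetical addition (not part of the original exercise): fixing the random seeds\r\n# makes the generated samples and loss curves below easier to reproduce across runs.\r\nnp.random.seed(0)\r\ntorch.manual_seed(0)",
"_____no_output_____"
],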
[
"from torchvision import datasets\r\nimport torchvision.transforms as transforms\r\n\r\n# number of subprocesses to use for data loading\r\nnum_workers = 0\r\n# how many samples per batch to load\r\nbatch_size = 64\r\n\r\n# convert data to torch.FloatTensor\r\ntransform = transforms.ToTensor()\r\n\r\n# get the training datasets\r\ntrain_data = datasets.MNIST(root='gan-mnist\\data', train=True,\r\n download=False, transform=transform)\r\n\r\n# prepare data loader\r\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\r\n num_workers=num_workers)",
"_____no_output_____"
]
],
[
[
"### Visualize the data",
"_____no_output_____"
]
],
[
[
"# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# get one image from the batch\nimg = np.squeeze(images[0])\n\nfig = plt.figure(figsize = (3,3)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')",
"_____no_output_____"
]
],
[
[
"---\n# Define the Model\n\nA GAN is comprised of two adversarial networks, a discriminator and a generator.",
"_____no_output_____"
],
[
"## Discriminator\n\nThe discriminator network is going to be a pretty typical linear classifier. To make this network a universal function approximator, we'll need at least one hidden layer, and these hidden layers should have one key attribute:\n> All hidden layers will have a [Leaky ReLu](https://pytorch.org/docs/stable/nn.html#torch.nn.LeakyReLU) activation function applied to their outputs.\n\n<img src='assets/gan_network.png' width=70% />\n\n#### Leaky ReLu\n\nWe should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.\n\n<img src='assets/leaky_relu.png' width=40% />\n\n#### Sigmoid Output\n\nWe'll also take the approach of using a more numerically stable loss function on the outputs. Recall that we want the discriminator to output a value 0-1 indicating whether an image is _real or fake_. \n> We will ultimately use [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss), which combines a `sigmoid` activation function **and** binary cross entropy loss in one function. \n\nSo, our final output layer should not have any activation function applied to it.",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\r\nimport torch.nn.functional as F\r\n\r\nclass Discriminator(nn.Module):\r\n\r\n def __init__(self, input_size, hidden_dim, output_size):\r\n super(Discriminator, self).__init__()\r\n \r\n self.fc1 = nn.Linear(input_size, hidden_dim*4)\r\n self.fc2 = nn.Linear(hidden_dim*4, hidden_dim*2)\r\n self.fc3 = nn.Linear(hidden_dim*2, hidden_dim)\r\n self.fc4 = nn.Linear(hidden_dim, output_size)\r\n\r\n self.dropout = nn.Dropout(0.3)\r\n \r\n \r\n def forward(self, x):\r\n # flatten image\r\n x = x.view(-1, 28*28)\r\n # pass x through all layers\r\n x = F.leaky_relu(self.fc1(x), 0.2)\r\n x = self.dropout(x)\r\n x = F.leaky_relu(self.fc2(x), 0.2)\r\n x = self.dropout(x)\r\n x = F.leaky_relu(self.fc3(x), 0.2)\r\n x = self.dropout(x)\r\n\r\n return self.fc4(x)\r\n",
"_____no_output_____"
]
],
[
[
"## Generator\n\nThe generator network will be almost exactly the same as the discriminator network, except that we're applying a [tanh activation function](https://pytorch.org/docs/stable/nn.html#tanh) to our output layer.\n\n#### tanh Output\nThe generator has been found to perform the best with $tanh$ for the generator output, which scales the output to be between -1 and 1, instead of 0 and 1. \n\n<img src='assets/tanh_fn.png' width=40% />\n\nRecall that we also want these outputs to be comparable to the *real* input pixel values, which are read in as normalized values between 0 and 1. \n> So, we'll also have to **scale our real input images to have pixel values between -1 and 1** when we train the discriminator. \n\nI'll do this in the training loop, later on.",
"_____no_output_____"
]
],
[
[
"class Generator(nn.Module):\r\n\r\n def __init__(self, input_size, hidden_dim, output_size):\r\n super(Generator, self).__init__()\r\n \r\n # define all layers\r\n self.fc1 = nn.Linear(input_size, hidden_dim)\r\n self.fc2 = nn.Linear(hidden_dim, hidden_dim*2)\r\n self.fc3 = nn.Linear(hidden_dim*2, hidden_dim*4)\r\n self.fc4 = nn.Linear(hidden_dim*4, output_size)\r\n self.dropout = nn.Dropout(0.3)\r\n\r\n def forward(self, x):\r\n # pass x through all layers\r\n x = F.leaky_relu(self.fc1(x), 0.2)\r\n x = self.dropout(x)\r\n x = F.leaky_relu(self.fc2(x), 0.2)\r\n x = self.dropout(x)\r\n x = F.leaky_relu(self.fc3(x), 0.2)\r\n x = self.dropout(x)\r\n # final layer should have tanh applied\r\n x = F.tanh(self.fc4(x))\r\n return x",
"_____no_output_____"
]
],
[
[
"## Model hyperparameters",
"_____no_output_____"
]
],
[
[
"# Discriminator hyperparams\r\n\r\n# Size of input image to discriminator (28*28)\r\ninput_size = 28*28\r\n# Size of discriminator output (real or fake)\r\nd_output_size = 1\r\n# Size of *last* hidden layer in the discriminator\r\nd_hidden_size = 32\r\n\r\n# Generator hyperparams\r\n\r\n# Size of latent vector to give to generator\r\nz_size = 100\r\n# Size of discriminator output (generated image)\r\ng_output_size = 28*28 \r\n# Size of *first* hidden layer in the generator\r\ng_hidden_size = 32",
"_____no_output_____"
]
],
[
[
"## Build complete network\n\nNow we're instantiating the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.",
"_____no_output_____"
]
],
[
[
"# instantiate discriminator and generator\nD = Discriminator(input_size, d_hidden_size, d_output_size)\nG = Generator(z_size, g_hidden_size, g_output_size)\n\n# check that they are as you expect\nprint(D)\nprint()\nprint(G)",
"Discriminator(\n (fc1): Linear(in_features=784, out_features=128, bias=True)\n (fc2): Linear(in_features=128, out_features=64, bias=True)\n (fc3): Linear(in_features=64, out_features=32, bias=True)\n (fc4): Linear(in_features=32, out_features=1, bias=True)\n (dropout): Dropout(p=0.3, inplace=False)\n)\n\nGenerator(\n (fc1): Linear(in_features=100, out_features=32, bias=True)\n (fc2): Linear(in_features=32, out_features=64, bias=True)\n (fc3): Linear(in_features=64, out_features=128, bias=True)\n (fc4): Linear(in_features=128, out_features=784, bias=True)\n (dropout): Dropout(p=0.3, inplace=False)\n)\n"
]
],
[
[
"---\n## Discriminator and Generator Losses\n\nNow we need to calculate the losses. \n\n### Discriminator Losses\n\n> * For the discriminator, the total loss is the sum of the losses for real and fake images, `d_loss = d_real_loss + d_fake_loss`. \n* Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.\n\n<img src='assets/gan_pipeline.png' width=70% />\n\nThe losses will by binary cross entropy loss with logits, which we can get with [BCEWithLogitsLoss](https://pytorch.org/docs/stable/nn.html#bcewithlogitsloss). This combines a `sigmoid` activation function **and** and binary cross entropy loss in one function.\n\nFor the real images, we want `D(real_images) = 1`. That is, we want the discriminator to classify the the real images with a label = 1, indicating that these are real. To help the discriminator generalize better, the labels are **reduced a bit from 1.0 to 0.9**. For this, we'll use the parameter `smooth`; if True, then we should smooth our labels. In PyTorch, this looks like `labels = torch.ones(size) * 0.9`\n\nThe discriminator loss for the fake data is similar. We want `D(fake_images) = 0`, where the fake images are the _generator output_, `fake_images = G(z)`. \n\n### Generator Loss\n\nThe generator loss will look similar only with flipped labels. The generator's goal is to get `D(fake_images) = 1`. In this case, the labels are **flipped** to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!",
"_____no_output_____"
]
],
[
[
"# Calculate losses\r\ndef real_loss(D_out, smooth=False):\r\n # compare logits to real labels\r\n # smooth labels if smooth=True\r\n\r\n labels = torch.ones(D_out.size(0))*0.9 if smooth else torch.ones(D_out.size(0))\r\n\r\n criterion = nn.BCEWithLogitsLoss()\r\n\r\n loss = criterion(D_out.squeeze(), labels) \r\n return loss\r\n\r\ndef fake_loss(D_out):\r\n # compare logits to fake labels\r\n labels = torch.zeros(D_out.size(0))\r\n criterion = nn.BCEWithLogitsLoss()\r\n loss = criterion(D_out.squeeze(), labels)\r\n return loss",
"_____no_output_____"
]
],
[
[
"## Optimizers\n\nWe want to update the generator and discriminator variables separately. So, we'll define two separate Adam optimizers.",
"_____no_output_____"
]
],
[
[
"import torch.optim as optim\r\n\r\n# learning rate for optimizers\r\nlr = 0.002\r\n\r\n# Create optimizers for the discriminator and generator\r\nd_optimizer = optim.Adam(D.parameters(), lr)\r\ng_optimizer = optim.Adam(G.parameters(), lr)",
"_____no_output_____"
]
],
[
[
"---\n## Training\n\nTraining will involve alternating between training the discriminator and the generator. We'll use our functions `real_loss` and `fake_loss` to help us calculate the discriminator losses in all of the following cases.\n\n### Discriminator training\n1. Compute the discriminator loss on real, training images \n2. Generate fake images\n3. Compute the discriminator loss on fake, generated images \n4. Add up real and fake loss\n5. Perform backpropagation + an optimization step to update the discriminator's weights\n\n### Generator training\n1. Generate fake images\n2. Compute the discriminator loss on fake images, using **flipped** labels!\n3. Perform backpropagation + an optimization step to update the generator's weights\n\n#### Saving Samples\n\nAs we train, we'll also print out some loss statistics and save some generated \"fake\" samples.",
"_____no_output_____"
]
],
[
[
"import pickle as pkl\r\n\r\n# training hyperparams\r\nnum_epochs = 100\r\n\r\n# keep track of loss and generated, \"fake\" samples\r\nsamples = []\r\nlosses = []\r\n\r\nprint_every = 400\r\n\r\n# Get some fixed data for sampling. These are images that are held\r\n# constant throughout training, and allow us to inspect the model's performance\r\nsample_size=16\r\nfixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))\r\nfixed_z = torch.from_numpy(fixed_z).float()\r\n\r\n# train the network\r\nD.train()\r\nG.train()\r\nfor epoch in range(num_epochs):\r\n \r\n for batch_i, (real_images, _) in enumerate(train_loader):\r\n \r\n batch_size = real_images.size(0)\r\n \r\n ## Important rescaling step ## \r\n real_images = real_images*2 - 1 # rescale input images from [0,1) to [-1, 1)\r\n \r\n # ============================================\r\n # TRAIN THE DISCRIMINATOR\r\n # ============================================\r\n\r\n d_optimizer.zero_grad()\r\n\r\n # 1. Train with real images\r\n\r\n # Compute the discriminator losses on real images\r\n # use smoothed labels\r\n d_real = D(real_images)\r\n d_real_loss = real_loss(d_real, True)\r\n\r\n # 2. Train with fake images\r\n\r\n # Generate fake images\r\n z = np.random.uniform(-1, 1, size=(batch_size, z_size))\r\n z = torch.from_numpy(z).float()\r\n fake_images = G(z)\r\n \r\n # Compute the discriminator losses on fake images \r\n d_fake = D(fake_images)\r\n d_fake_loss = fake_loss(d_fake)\r\n\r\n # add up real and fake losses and perform backprop\r\n d_loss = d_fake_loss + d_real_loss\r\n d_loss.backward()\r\n d_optimizer.step()\r\n \r\n # =========================================\r\n # TRAIN THE GENERATOR\r\n # =========================================\r\n \r\n # 1. Train with fake images and flipped labels\r\n g_optimizer.zero_grad()\r\n\r\n # Generate fake images\r\n z = np.random.uniform(-1, 1, size=(batch_size, z_size))\r\n z = torch.from_numpy(z).float()\r\n fake_images = G(z)\r\n\r\n # Compute the discriminator losses on fake images \r\n # using flipped labels!\r\n d_fake = D(fake_images)\r\n g_loss = real_loss(d_fake)\r\n\r\n # perform backprop\r\n g_loss.backward()\r\n g_optimizer.step() \r\n \r\n\r\n # Print some loss stats\r\n if batch_i % print_every == 0:\r\n # print discriminator and generator loss\r\n print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(\r\n epoch+1, num_epochs, d_loss.item(), g_loss.item()))\r\n\r\n \r\n ## AFTER EACH EPOCH##\r\n # append discriminator loss and generator loss\r\n losses.append((d_loss.item(), g_loss.item()))\r\n \r\n # generate and save sample, fake images\r\n G.eval() # eval mode for generating samples\r\n samples_z = G(fixed_z)\r\n samples.append(samples_z)\r\n G.train() # back to train mode\r\n\r\n\r\n# Save training generator samples\r\nwith open('train_samples.pkl', 'wb') as f:\r\n pkl.dump(samples, f)",
"Epoch [ 1/ 100] | d_loss: 1.3531 | g_loss: 0.7711\nEpoch [ 1/ 100] | d_loss: 1.2302 | g_loss: 1.2285\nEpoch [ 1/ 100] | d_loss: 1.4087 | g_loss: 1.0569\nEpoch [ 2/ 100] | d_loss: 1.3109 | g_loss: 1.0101\nEpoch [ 2/ 100] | d_loss: 1.2816 | g_loss: 1.0060\nEpoch [ 2/ 100] | d_loss: 1.3010 | g_loss: 1.0656\nEpoch [ 3/ 100] | d_loss: 1.2915 | g_loss: 0.9366\nEpoch [ 3/ 100] | d_loss: 1.2509 | g_loss: 1.1355\nEpoch [ 3/ 100] | d_loss: 1.3730 | g_loss: 1.0632\nEpoch [ 4/ 100] | d_loss: 1.3512 | g_loss: 0.9664\nEpoch [ 4/ 100] | d_loss: 1.2943 | g_loss: 0.9878\nEpoch [ 4/ 100] | d_loss: 1.4087 | g_loss: 1.0644\nEpoch [ 5/ 100] | d_loss: 1.2353 | g_loss: 1.3839\nEpoch [ 5/ 100] | d_loss: 1.1897 | g_loss: 0.9578\nEpoch [ 5/ 100] | d_loss: 1.2582 | g_loss: 1.2406\nEpoch [ 6/ 100] | d_loss: 1.2821 | g_loss: 1.1619\nEpoch [ 6/ 100] | d_loss: 1.2806 | g_loss: 1.1365\nEpoch [ 6/ 100] | d_loss: 1.4154 | g_loss: 1.1358\nEpoch [ 7/ 100] | d_loss: 1.2333 | g_loss: 1.6435\nEpoch [ 7/ 100] | d_loss: 1.2123 | g_loss: 1.0421\nEpoch [ 7/ 100] | d_loss: 1.3686 | g_loss: 0.8658\nEpoch [ 8/ 100] | d_loss: 1.3349 | g_loss: 0.9474\nEpoch [ 8/ 100] | d_loss: 1.3414 | g_loss: 0.9123\nEpoch [ 8/ 100] | d_loss: 1.3940 | g_loss: 1.0812\nEpoch [ 9/ 100] | d_loss: 1.3077 | g_loss: 0.8945\nEpoch [ 9/ 100] | d_loss: 1.2999 | g_loss: 1.0142\nEpoch [ 9/ 100] | d_loss: 1.2527 | g_loss: 1.6874\nEpoch [ 10/ 100] | d_loss: 1.3648 | g_loss: 1.5261\nEpoch [ 10/ 100] | d_loss: 1.2058 | g_loss: 1.1350\nEpoch [ 10/ 100] | d_loss: 1.2218 | g_loss: 1.1011\nEpoch [ 11/ 100] | d_loss: 1.3088 | g_loss: 0.9353\nEpoch [ 11/ 100] | d_loss: 1.2966 | g_loss: 0.9938\nEpoch [ 11/ 100] | d_loss: 1.3508 | g_loss: 0.8854\nEpoch [ 12/ 100] | d_loss: 1.3110 | g_loss: 1.1372\nEpoch [ 12/ 100] | d_loss: 1.3338 | g_loss: 1.0894\nEpoch [ 12/ 100] | d_loss: 1.3748 | g_loss: 0.9612\nEpoch [ 13/ 100] | d_loss: 1.2627 | g_loss: 0.8989\nEpoch [ 13/ 100] | d_loss: 1.2437 | g_loss: 0.9147\nEpoch [ 13/ 100] | d_loss: 1.3180 | g_loss: 0.9359\nEpoch [ 14/ 100] | d_loss: 1.2543 | g_loss: 1.0714\nEpoch [ 14/ 100] | d_loss: 1.2334 | g_loss: 1.1931\nEpoch [ 14/ 100] | d_loss: 1.4255 | g_loss: 1.3632\nEpoch [ 15/ 100] | d_loss: 1.3010 | g_loss: 0.8029\nEpoch [ 15/ 100] | d_loss: 1.3200 | g_loss: 1.0237\nEpoch [ 15/ 100] | d_loss: 1.3550 | g_loss: 1.2831\nEpoch [ 16/ 100] | d_loss: 1.2534 | g_loss: 1.0717\nEpoch [ 16/ 100] | d_loss: 1.2887 | g_loss: 0.9360\nEpoch [ 16/ 100] | d_loss: 1.3533 | g_loss: 1.0733\nEpoch [ 17/ 100] | d_loss: 1.3708 | g_loss: 1.8827\nEpoch [ 17/ 100] | d_loss: 1.2202 | g_loss: 1.3623\nEpoch [ 17/ 100] | d_loss: 1.3645 | g_loss: 0.8567\nEpoch [ 18/ 100] | d_loss: 1.2624 | g_loss: 0.8940\nEpoch [ 18/ 100] | d_loss: 1.2971 | g_loss: 1.0213\nEpoch [ 18/ 100] | d_loss: 1.2807 | g_loss: 1.1927\nEpoch [ 19/ 100] | d_loss: 1.3378 | g_loss: 1.0828\nEpoch [ 19/ 100] | d_loss: 1.3348 | g_loss: 1.0060\nEpoch [ 19/ 100] | d_loss: 1.3870 | g_loss: 0.9708\nEpoch [ 20/ 100] | d_loss: 1.4101 | g_loss: 0.8896\nEpoch [ 20/ 100] | d_loss: 1.3094 | g_loss: 0.7296\nEpoch [ 20/ 100] | d_loss: 1.2519 | g_loss: 1.0081\nEpoch [ 21/ 100] | d_loss: 1.1927 | g_loss: 1.1677\nEpoch [ 21/ 100] | d_loss: 1.1815 | g_loss: 1.4335\nEpoch [ 21/ 100] | d_loss: 1.2899 | g_loss: 1.0089\nEpoch [ 22/ 100] | d_loss: 1.2312 | g_loss: 0.8798\nEpoch [ 22/ 100] | d_loss: 1.3137 | g_loss: 0.9090\nEpoch [ 22/ 100] | d_loss: 1.3854 | g_loss: 1.0089\nEpoch [ 23/ 100] | d_loss: 1.2906 | g_loss: 1.2212\nEpoch [ 23/ 100] | d_loss: 1.3960 | g_loss: 0.9121\nEpoch [ 23/ 100] | d_loss: 1.3336 | g_loss: 
0.8902\nEpoch [ 24/ 100] | d_loss: 1.3888 | g_loss: 0.8801\nEpoch [ 24/ 100] | d_loss: 1.3206 | g_loss: 1.0313\nEpoch [ 24/ 100] | d_loss: 1.1475 | g_loss: 2.0411\nEpoch [ 25/ 100] | d_loss: 1.3560 | g_loss: 1.2061\nEpoch [ 25/ 100] | d_loss: 1.3523 | g_loss: 0.8845\nEpoch [ 25/ 100] | d_loss: 1.2371 | g_loss: 1.3663\nEpoch [ 26/ 100] | d_loss: 1.3033 | g_loss: 1.0123\nEpoch [ 26/ 100] | d_loss: 1.1944 | g_loss: 0.9201\nEpoch [ 26/ 100] | d_loss: 1.2866 | g_loss: 0.9932\nEpoch [ 27/ 100] | d_loss: 1.1896 | g_loss: 1.1832\nEpoch [ 27/ 100] | d_loss: 1.3499 | g_loss: 1.0889\nEpoch [ 27/ 100] | d_loss: 1.3422 | g_loss: 1.0348\nEpoch [ 28/ 100] | d_loss: 1.2485 | g_loss: 1.1148\nEpoch [ 28/ 100] | d_loss: 1.2642 | g_loss: 1.2088\nEpoch [ 28/ 100] | d_loss: 1.4397 | g_loss: 0.9339\nEpoch [ 29/ 100] | d_loss: 1.2895 | g_loss: 0.7498\nEpoch [ 29/ 100] | d_loss: 1.1506 | g_loss: 1.1295\nEpoch [ 29/ 100] | d_loss: 1.4491 | g_loss: 1.3574\nEpoch [ 30/ 100] | d_loss: 1.2559 | g_loss: 1.5777\nEpoch [ 30/ 100] | d_loss: 1.2815 | g_loss: 0.9625\nEpoch [ 30/ 100] | d_loss: 1.3115 | g_loss: 1.2686\nEpoch [ 31/ 100] | d_loss: 1.2588 | g_loss: 1.1423\nEpoch [ 31/ 100] | d_loss: 1.2136 | g_loss: 1.2242\nEpoch [ 31/ 100] | d_loss: 1.3801 | g_loss: 0.9923\nEpoch [ 32/ 100] | d_loss: 1.3285 | g_loss: 1.0367\nEpoch [ 32/ 100] | d_loss: 1.2289 | g_loss: 1.1321\nEpoch [ 32/ 100] | d_loss: 1.3601 | g_loss: 1.0313\nEpoch [ 33/ 100] | d_loss: 1.1948 | g_loss: 1.0068\nEpoch [ 33/ 100] | d_loss: 1.2937 | g_loss: 1.0073\nEpoch [ 33/ 100] | d_loss: 1.2989 | g_loss: 0.9868\nEpoch [ 34/ 100] | d_loss: 1.4460 | g_loss: 0.9134\nEpoch [ 34/ 100] | d_loss: 1.2148 | g_loss: 1.1956\nEpoch [ 34/ 100] | d_loss: 1.3533 | g_loss: 0.8891\nEpoch [ 35/ 100] | d_loss: 1.2097 | g_loss: 1.4650\nEpoch [ 35/ 100] | d_loss: 1.2279 | g_loss: 1.2668\nEpoch [ 35/ 100] | d_loss: 1.4249 | g_loss: 0.9469\nEpoch [ 36/ 100] | d_loss: 1.2485 | g_loss: 1.1262\nEpoch [ 36/ 100] | d_loss: 1.2594 | g_loss: 0.9882\nEpoch [ 36/ 100] | d_loss: 1.3844 | g_loss: 1.1260\nEpoch [ 37/ 100] | d_loss: 1.2595 | g_loss: 0.8513\nEpoch [ 37/ 100] | d_loss: 1.2090 | g_loss: 1.1805\nEpoch [ 37/ 100] | d_loss: 1.2444 | g_loss: 1.2197\nEpoch [ 38/ 100] | d_loss: 1.2216 | g_loss: 0.8636\nEpoch [ 38/ 100] | d_loss: 1.2018 | g_loss: 1.3839\nEpoch [ 38/ 100] | d_loss: 1.3141 | g_loss: 0.9494\nEpoch [ 39/ 100] | d_loss: 1.1622 | g_loss: 1.5581\nEpoch [ 39/ 100] | d_loss: 1.2891 | g_loss: 1.0469\nEpoch [ 39/ 100] | d_loss: 1.4488 | g_loss: 0.8762\nEpoch [ 40/ 100] | d_loss: 1.3004 | g_loss: 1.3421\nEpoch [ 40/ 100] | d_loss: 1.3234 | g_loss: 1.0893\nEpoch [ 40/ 100] | d_loss: 1.3323 | g_loss: 1.2440\nEpoch [ 41/ 100] | d_loss: 1.2183 | g_loss: 1.0090\nEpoch [ 41/ 100] | d_loss: 1.1713 | g_loss: 1.0806\nEpoch [ 41/ 100] | d_loss: 1.3810 | g_loss: 0.9084\nEpoch [ 42/ 100] | d_loss: 1.2401 | g_loss: 0.9914\nEpoch [ 42/ 100] | d_loss: 1.1330 | g_loss: 1.2833\nEpoch [ 42/ 100] | d_loss: 1.2439 | g_loss: 1.2622\nEpoch [ 43/ 100] | d_loss: 1.2373 | g_loss: 1.1852\nEpoch [ 43/ 100] | d_loss: 1.3817 | g_loss: 1.3201\nEpoch [ 43/ 100] | d_loss: 1.3082 | g_loss: 1.0833\nEpoch [ 44/ 100] | d_loss: 1.2528 | g_loss: 0.9459\nEpoch [ 44/ 100] | d_loss: 1.1986 | g_loss: 1.4606\nEpoch [ 44/ 100] | d_loss: 1.3660 | g_loss: 1.0613\nEpoch [ 45/ 100] | d_loss: 1.2140 | g_loss: 0.8630\nEpoch [ 45/ 100] | d_loss: 1.3629 | g_loss: 0.9297\nEpoch [ 45/ 100] | d_loss: 1.4048 | g_loss: 1.2440\nEpoch [ 46/ 100] | d_loss: 1.1463 | g_loss: 1.3287\nEpoch [ 46/ 100] | d_loss: 1.2906 | g_loss: 1.2744\nEpoch [ 
46/ 100] | d_loss: 1.3228 | g_loss: 0.9430\nEpoch [ 47/ 100] | d_loss: 1.2826 | g_loss: 0.9339\nEpoch [ 47/ 100] | d_loss: 1.2783 | g_loss: 1.1001\nEpoch [ 47/ 100] | d_loss: 1.4375 | g_loss: 0.8785\nEpoch [ 48/ 100] | d_loss: 1.3932 | g_loss: 1.0877\nEpoch [ 48/ 100] | d_loss: 1.2612 | g_loss: 0.9551\nEpoch [ 48/ 100] | d_loss: 1.4495 | g_loss: 1.0104\nEpoch [ 49/ 100] | d_loss: 1.2554 | g_loss: 1.2202\nEpoch [ 49/ 100] | d_loss: 1.2347 | g_loss: 1.2533\nEpoch [ 49/ 100] | d_loss: 1.2530 | g_loss: 1.2297\nEpoch [ 50/ 100] | d_loss: 1.3301 | g_loss: 0.9604\nEpoch [ 50/ 100] | d_loss: 1.2942 | g_loss: 0.9739\nEpoch [ 50/ 100] | d_loss: 1.3025 | g_loss: 0.9324\nEpoch [ 51/ 100] | d_loss: 1.2742 | g_loss: 0.8984\nEpoch [ 51/ 100] | d_loss: 1.1813 | g_loss: 1.1831\nEpoch [ 51/ 100] | d_loss: 1.5283 | g_loss: 1.0025\nEpoch [ 52/ 100] | d_loss: 1.2598 | g_loss: 0.9019\nEpoch [ 52/ 100] | d_loss: 1.2036 | g_loss: 1.1081\nEpoch [ 52/ 100] | d_loss: 1.3123 | g_loss: 1.1876\nEpoch [ 53/ 100] | d_loss: 1.2999 | g_loss: 1.3040\nEpoch [ 53/ 100] | d_loss: 1.2278 | g_loss: 1.0079\nEpoch [ 53/ 100] | d_loss: 1.3751 | g_loss: 0.9916\nEpoch [ 54/ 100] | d_loss: 1.3614 | g_loss: 1.1176\nEpoch [ 54/ 100] | d_loss: 1.1290 | g_loss: 0.9705\nEpoch [ 54/ 100] | d_loss: 1.3878 | g_loss: 0.9611\nEpoch [ 55/ 100] | d_loss: 1.3186 | g_loss: 1.1652\nEpoch [ 55/ 100] | d_loss: 1.2770 | g_loss: 0.9314\nEpoch [ 55/ 100] | d_loss: 1.2727 | g_loss: 1.3335\nEpoch [ 56/ 100] | d_loss: 1.3186 | g_loss: 1.1503\nEpoch [ 56/ 100] | d_loss: 1.2549 | g_loss: 1.0088\nEpoch [ 56/ 100] | d_loss: 1.4016 | g_loss: 0.9255\nEpoch [ 57/ 100] | d_loss: 1.3610 | g_loss: 0.9987\nEpoch [ 57/ 100] | d_loss: 1.2572 | g_loss: 1.1284\nEpoch [ 57/ 100] | d_loss: 1.4140 | g_loss: 1.2915\nEpoch [ 58/ 100] | d_loss: 1.2236 | g_loss: 1.0272\nEpoch [ 58/ 100] | d_loss: 1.2063 | g_loss: 1.1505\nEpoch [ 58/ 100] | d_loss: 1.3042 | g_loss: 0.9073\nEpoch [ 59/ 100] | d_loss: 1.3560 | g_loss: 0.9239\nEpoch [ 59/ 100] | d_loss: 1.2773 | g_loss: 0.8953\nEpoch [ 59/ 100] | d_loss: 1.2887 | g_loss: 0.9548\nEpoch [ 60/ 100] | d_loss: 1.3979 | g_loss: 1.2045\nEpoch [ 60/ 100] | d_loss: 1.2855 | g_loss: 1.0597\nEpoch [ 60/ 100] | d_loss: 1.4218 | g_loss: 1.0329\nEpoch [ 61/ 100] | d_loss: 1.1825 | g_loss: 1.2527\nEpoch [ 61/ 100] | d_loss: 1.2309 | g_loss: 1.3306\nEpoch [ 61/ 100] | d_loss: 1.3371 | g_loss: 1.0675\nEpoch [ 62/ 100] | d_loss: 1.3084 | g_loss: 1.0204\nEpoch [ 62/ 100] | d_loss: 1.1800 | g_loss: 1.0729\nEpoch [ 62/ 100] | d_loss: 1.2923 | g_loss: 1.0921\nEpoch [ 63/ 100] | d_loss: 1.2504 | g_loss: 1.2717\nEpoch [ 63/ 100] | d_loss: 1.1416 | g_loss: 0.9719\nEpoch [ 63/ 100] | d_loss: 1.3151 | g_loss: 0.9817\nEpoch [ 64/ 100] | d_loss: 1.2600 | g_loss: 1.1755\nEpoch [ 64/ 100] | d_loss: 1.4005 | g_loss: 1.0600\nEpoch [ 64/ 100] | d_loss: 1.4007 | g_loss: 1.1681\nEpoch [ 65/ 100] | d_loss: 1.2142 | g_loss: 1.0192\nEpoch [ 65/ 100] | d_loss: 1.2148 | g_loss: 1.1135\nEpoch [ 65/ 100] | d_loss: 1.3737 | g_loss: 1.0343\nEpoch [ 66/ 100] | d_loss: 1.2231 | g_loss: 1.0220\nEpoch [ 66/ 100] | d_loss: 1.2848 | g_loss: 1.0881\nEpoch [ 66/ 100] | d_loss: 1.2471 | g_loss: 1.4344\nEpoch [ 67/ 100] | d_loss: 1.1928 | g_loss: 1.0873\nEpoch [ 67/ 100] | d_loss: 1.1354 | g_loss: 1.2968\nEpoch [ 67/ 100] | d_loss: 1.2804 | g_loss: 1.0571\nEpoch [ 68/ 100] | d_loss: 1.3919 | g_loss: 0.9939\nEpoch [ 68/ 100] | d_loss: 1.1789 | g_loss: 1.0430\nEpoch [ 68/ 100] | d_loss: 1.3973 | g_loss: 0.9902\nEpoch [ 69/ 100] | d_loss: 1.2891 | g_loss: 1.0297\nEpoch [ 69/ 100] | d_loss: 
1.1998 | g_loss: 0.9157\nEpoch [ 69/ 100] | d_loss: 1.4071 | g_loss: 1.2276\nEpoch [ 70/ 100] | d_loss: 1.2877 | g_loss: 1.0469\nEpoch [ 70/ 100] | d_loss: 1.3072 | g_loss: 1.2479\nEpoch [ 70/ 100] | d_loss: 1.3134 | g_loss: 0.9432\nEpoch [ 71/ 100] | d_loss: 1.2624 | g_loss: 1.1425\nEpoch [ 71/ 100] | d_loss: 1.2338 | g_loss: 1.2214\nEpoch [ 71/ 100] | d_loss: 1.2826 | g_loss: 0.9315\nEpoch [ 72/ 100] | d_loss: 1.3600 | g_loss: 1.1025\nEpoch [ 72/ 100] | d_loss: 1.1921 | g_loss: 1.0479\nEpoch [ 72/ 100] | d_loss: 1.3151 | g_loss: 1.0490\nEpoch [ 73/ 100] | d_loss: 1.2933 | g_loss: 1.0347\nEpoch [ 73/ 100] | d_loss: 1.1705 | g_loss: 1.1612\nEpoch [ 73/ 100] | d_loss: 1.2874 | g_loss: 1.2029\nEpoch [ 74/ 100] | d_loss: 1.2517 | g_loss: 1.8946\nEpoch [ 74/ 100] | d_loss: 1.2032 | g_loss: 1.0439\nEpoch [ 74/ 100] | d_loss: 1.2673 | g_loss: 1.1065\nEpoch [ 75/ 100] | d_loss: 1.2299 | g_loss: 0.9581\nEpoch [ 75/ 100] | d_loss: 1.2312 | g_loss: 1.1951\nEpoch [ 75/ 100] | d_loss: 1.2460 | g_loss: 1.1288\nEpoch [ 76/ 100] | d_loss: 1.3518 | g_loss: 0.9225\nEpoch [ 76/ 100] | d_loss: 1.1811 | g_loss: 1.0097\nEpoch [ 76/ 100] | d_loss: 1.2988 | g_loss: 1.0762\nEpoch [ 77/ 100] | d_loss: 1.3217 | g_loss: 0.7855\nEpoch [ 77/ 100] | d_loss: 1.2007 | g_loss: 1.1404\nEpoch [ 77/ 100] | d_loss: 1.3239 | g_loss: 1.1003\nEpoch [ 78/ 100] | d_loss: 1.2543 | g_loss: 0.9330\nEpoch [ 78/ 100] | d_loss: 1.2206 | g_loss: 1.0680\nEpoch [ 78/ 100] | d_loss: 1.2738 | g_loss: 1.2152\nEpoch [ 79/ 100] | d_loss: 1.4162 | g_loss: 1.1707\nEpoch [ 79/ 100] | d_loss: 1.2837 | g_loss: 1.3022\nEpoch [ 79/ 100] | d_loss: 1.3001 | g_loss: 1.0318\nEpoch [ 80/ 100] | d_loss: 1.3078 | g_loss: 1.7396\nEpoch [ 80/ 100] | d_loss: 1.1743 | g_loss: 0.9199\nEpoch [ 80/ 100] | d_loss: 1.3819 | g_loss: 1.0979\nEpoch [ 81/ 100] | d_loss: 1.1138 | g_loss: 1.3053\nEpoch [ 81/ 100] | d_loss: 1.3514 | g_loss: 1.0994\nEpoch [ 81/ 100] | d_loss: 1.3810 | g_loss: 1.0433\nEpoch [ 82/ 100] | d_loss: 1.2704 | g_loss: 0.8787\nEpoch [ 82/ 100] | d_loss: 1.2360 | g_loss: 1.4693\nEpoch [ 82/ 100] | d_loss: 1.3002 | g_loss: 1.0383\nEpoch [ 83/ 100] | d_loss: 1.2187 | g_loss: 0.9998\nEpoch [ 83/ 100] | d_loss: 1.2148 | g_loss: 1.3135\nEpoch [ 83/ 100] | d_loss: 1.3308 | g_loss: 1.0198\nEpoch [ 84/ 100] | d_loss: 1.2388 | g_loss: 1.4165\nEpoch [ 84/ 100] | d_loss: 1.1828 | g_loss: 1.2465\nEpoch [ 84/ 100] | d_loss: 1.3094 | g_loss: 1.1794\nEpoch [ 85/ 100] | d_loss: 1.3170 | g_loss: 1.2985\nEpoch [ 85/ 100] | d_loss: 1.3308 | g_loss: 1.1359\nEpoch [ 85/ 100] | d_loss: 1.3534 | g_loss: 0.9762\nEpoch [ 86/ 100] | d_loss: 1.3212 | g_loss: 1.2828\nEpoch [ 86/ 100] | d_loss: 1.2369 | g_loss: 1.1310\nEpoch [ 86/ 100] | d_loss: 1.2853 | g_loss: 0.9044\nEpoch [ 87/ 100] | d_loss: 1.2077 | g_loss: 0.9970\nEpoch [ 87/ 100] | d_loss: 1.2552 | g_loss: 1.1322\nEpoch [ 87/ 100] | d_loss: 1.3196 | g_loss: 1.0316\nEpoch [ 88/ 100] | d_loss: 1.2572 | g_loss: 1.2576\nEpoch [ 88/ 100] | d_loss: 1.2096 | g_loss: 0.9951\nEpoch [ 88/ 100] | d_loss: 1.3247 | g_loss: 1.0205\nEpoch [ 89/ 100] | d_loss: 1.2757 | g_loss: 0.9470\nEpoch [ 89/ 100] | d_loss: 1.3420 | g_loss: 0.9204\nEpoch [ 89/ 100] | d_loss: 1.3458 | g_loss: 0.9850\nEpoch [ 90/ 100] | d_loss: 1.2746 | g_loss: 1.0729\nEpoch [ 90/ 100] | d_loss: 1.2336 | g_loss: 1.2089\nEpoch [ 90/ 100] | d_loss: 1.2442 | g_loss: 0.9017\nEpoch [ 91/ 100] | d_loss: 1.1981 | g_loss: 1.0634\nEpoch [ 91/ 100] | d_loss: 1.2220 | g_loss: 1.2552\nEpoch [ 91/ 100] | d_loss: 1.2923 | g_loss: 1.0948\nEpoch [ 92/ 100] | d_loss: 1.2367 | g_loss: 
1.1245\nEpoch [ 92/ 100] | d_loss: 1.2452 | g_loss: 0.9460\nEpoch [ 92/ 100] | d_loss: 1.3216 | g_loss: 1.0520\nEpoch [ 93/ 100] | d_loss: 1.2017 | g_loss: 1.3405\nEpoch [ 93/ 100] | d_loss: 1.2001 | g_loss: 1.0365\nEpoch [ 93/ 100] | d_loss: 1.3458 | g_loss: 1.0823\nEpoch [ 94/ 100] | d_loss: 1.3786 | g_loss: 1.0941\nEpoch [ 94/ 100] | d_loss: 1.1826 | g_loss: 0.9860\nEpoch [ 94/ 100] | d_loss: 1.3293 | g_loss: 1.0243\nEpoch [ 95/ 100] | d_loss: 1.3463 | g_loss: 1.2850\nEpoch [ 95/ 100] | d_loss: 1.1278 | g_loss: 1.0727\nEpoch [ 95/ 100] | d_loss: 1.2627 | g_loss: 1.2124\nEpoch [ 96/ 100] | d_loss: 1.3048 | g_loss: 1.1562\nEpoch [ 96/ 100] | d_loss: 1.2287 | g_loss: 1.0615\nEpoch [ 96/ 100] | d_loss: 1.3680 | g_loss: 1.0653\nEpoch [ 97/ 100] | d_loss: 1.2905 | g_loss: 1.2709\nEpoch [ 97/ 100] | d_loss: 1.2199 | g_loss: 1.0053\nEpoch [ 97/ 100] | d_loss: 1.3795 | g_loss: 1.1191\nEpoch [ 98/ 100] | d_loss: 1.2047 | g_loss: 1.1317\nEpoch [ 98/ 100] | d_loss: 1.2463 | g_loss: 0.9844\nEpoch [ 98/ 100] | d_loss: 1.3600 | g_loss: 0.9808\nEpoch [ 99/ 100] | d_loss: 1.2714 | g_loss: 0.9561\nEpoch [ 99/ 100] | d_loss: 1.2632 | g_loss: 1.0929\nEpoch [ 99/ 100] | d_loss: 1.3607 | g_loss: 0.9113\nEpoch [ 100/ 100] | d_loss: 1.2798 | g_loss: 1.1682\nEpoch [ 100/ 100] | d_loss: 1.2647 | g_loss: 1.1466\nEpoch [ 100/ 100] | d_loss: 1.3322 | g_loss: 1.0236\n"
]
],
[
[
"## Training loss\n\nHere we'll plot the training losses for the generator and discriminator, recorded after each epoch.",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator')\nplt.plot(losses.T[1], label='Generator')\nplt.title(\"Training Losses\")\nplt.legend()",
"_____no_output_____"
]
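,
[
"# Not part of the original notebook: a small follow-up sketch that prints the last recorded\n# loss pair so the end points of the two curves above can be read off directly. It assumes\n# `losses` is the (num_epochs, 2) array of (d_loss, g_loss) values built in the previous cell.\nfinal_d, final_g = losses[-1]\nprint(f\"Final recorded losses | d_loss: {final_d:.4f} | g_loss: {final_g:.4f}\")",
"_____no_output_____"
]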
],
[
[
"## Generator samples from training\n\nHere we can view samples of images from the generator. First we'll look at the images we saved during training.",
"_____no_output_____"
]
],
[
[
"# helper function for viewing a list of passed in sample images\ndef view_samples(epoch, samples):\n fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n img = img.detach()\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')",
"_____no_output_____"
],
[
"# Load samples from generator, taken while training\nwith open('train_samples.pkl', 'rb') as f:\n samples = pkl.load(f)",
"_____no_output_____"
]
],
[
[
"These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.",
"_____no_output_____"
]
],
[
[
"# -1 indicates final epoch's samples (the last in the list)\nview_samples(-1, samples)",
"_____no_output_____"
]
],
[
[
"Below I'm showing the generated images as the network was training, every 10 epochs.",
"_____no_output_____"
]
],
[
[
"rows = 10 # split epochs into 10, so 100/10 = every 10 epochs\ncols = 6\nfig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)\n\nfor sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):\n for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):\n img = img.detach()\n ax.imshow(img.reshape((28,28)), cmap='Greys_r')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)",
"_____no_output_____"
]
],
[
[
"It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.",
"_____no_output_____"
],
[
"## Sampling from the generator\n\nWe can also get completely new images from the generator by using the checkpoint we saved after training. **We just need to pass in a new latent vector $z$ and we'll get new samples**!",
"_____no_output_____"
]
],
[
[
"# randomly generated, new latent vectors\nsample_size=16\nrand_z = np.random.uniform(-1, 1, size=(sample_size, z_size))\nrand_z = torch.from_numpy(rand_z).float()\n\nG.eval() # eval mode\n# generated samples\nrand_images = G(rand_z)\n\n# 0 indicates the first set of samples in the passed in list\n# and we only have one batch of samples, here\nview_samples(0, [rand_images])",
"C:\\Users\\ngrec\\anaconda3\\envs\\udl\\lib\\site-packages\\torch\\nn\\functional.py:1628: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.\n warnings.warn(\"nn.functional.tanh is deprecated. Use torch.tanh instead.\")\n"
]
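,
[
"# Not part of the original notebook: a minimal checkpointing sketch. The markdown above mentions\n# sampling \"by using the checkpoint we saved after training\", but the checkpoint file is not shown\n# in this part of the notebook, so 'checkpoint_G.pth' below is an assumed placeholder name.\n# (The UserWarning above can be silenced by using torch.tanh instead of nn.functional.tanh in the\n# Generator's forward pass, as the warning message itself suggests.)\nimport torch\n\ntorch.save(G.state_dict(), 'checkpoint_G.pth')      # persist the trained generator weights\nG.load_state_dict(torch.load('checkpoint_G.pth'))   # restore them into the same architecture\nG.eval()                                             # back to eval mode before sampling\n\n# After restoring, re-running the sampling cell above yields new images from new latent vectors.",
"_____no_output_____"
]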
]
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] | [
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |